Test Report: KVM_Linux_crio 18375

                    
71179286cc00ab66370748dfc329f8d30a1d24a0:2024-03-14:33556

Failed tests (31/319)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 154.75
53 TestAddons/StoppedEnableDisable 154.48
129 TestFunctional/parallel/ImageCommands/ImageBuild 7.31
172 TestMutliControlPlane/serial/StopSecondaryNode 142.15
174 TestMutliControlPlane/serial/RestartSecondaryNode 55.43
176 TestMutliControlPlane/serial/RestartClusterKeepsNodes 377.78
177 TestMutliControlPlane/serial/DeleteSecondaryNode 63.89
179 TestMutliControlPlane/serial/StopCluster 142.08
239 TestMultiNode/serial/RestartKeepsNodes 307.79
241 TestMultiNode/serial/StopMultiNode 141.52
248 TestPreload 302.33
256 TestKubernetesUpgrade 372.51
284 TestPause/serial/SecondStartNoReconfiguration 64.71
322 TestStartStop/group/old-k8s-version/serial/FirstStart 306.63
347 TestStartStop/group/embed-certs/serial/Stop 139.02
350 TestStartStop/group/no-preload/serial/Stop 139.05
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.13
354 TestStartStop/group/old-k8s-version/serial/DeployApp 0.51
355 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 96.86
356 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
358 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
364 TestStartStop/group/old-k8s-version/serial/SecondStart 754.14
365 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.37
366 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.26
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.4
368 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.5
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 404
370 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 463.99
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 382.83
372 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 95.34
TestAddons/parallel/Ingress (154.75s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-524943 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-524943 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-524943 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9cbd44b7-07b2-4686-8df2-24235a9fafde] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9cbd44b7-07b2-4686-8df2-24235a9fafde] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.005688004s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-524943 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-524943 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.266602686s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-524943 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-524943 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.37
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-524943 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-524943 addons disable ingress-dns --alsologtostderr -v=1: (1.374267182s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-524943 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-524943 addons disable ingress --alsologtostderr -v=1: (7.99906575s)
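The curl probe above is the step that fails: minikube ssh reports the remote process exiting with status 28, which is curl's "operation timed out" code, so the ingress-nginx controller never answered on port 80 inside the VM even though the nginx pod itself reached Running. The following Go sketch is a rough equivalent of that probe run from the CI host instead of inside the guest; it targets the node IP 192.168.39.37 reported later in this log, and it is an illustration only, not the actual addons_test.go code.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same request the test issues via curl, sent from outside the VM.
	// 192.168.39.37 is the node IP from this log; adjust for another run.
	client := &http.Client{Timeout: 10 * time.Second}
	req, err := http.NewRequest("GET", "http://192.168.39.37/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // Host header matching the test's Ingress rule
	resp, err := client.Do(req)
	if err != nil {
		// A timeout here corresponds to the curl exit status 28 seen in the test.
		fmt.Println("ingress not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("ingress responded with", resp.Status)
}

If this probe also times out against a cluster in this state, the problem is reachability of the ingress controller service rather than the nginx pod, which is consistent with the post-mortem below.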
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-524943 -n addons-524943
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-524943 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-524943 logs -n 25: (1.317703128s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-312826                                                                     | download-only-312826 | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC | 13 Mar 24 23:27 UTC |
	| delete  | -p download-only-628793                                                                     | download-only-628793 | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC | 13 Mar 24 23:27 UTC |
	| delete  | -p download-only-690080                                                                     | download-only-690080 | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC | 13 Mar 24 23:27 UTC |
	| delete  | -p download-only-312826                                                                     | download-only-312826 | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC | 13 Mar 24 23:27 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-474267 | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC |                     |
	|         | binary-mirror-474267                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40817                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-474267                                                                     | binary-mirror-474267 | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC | 13 Mar 24 23:27 UTC |
	| addons  | enable dashboard -p                                                                         | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC |                     |
	|         | addons-524943                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC |                     |
	|         | addons-524943                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-524943 --wait=true                                                                | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC | 13 Mar 24 23:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-524943 addons                                                                        | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:29 UTC | 13 Mar 24 23:29 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-524943 addons disable                                                                | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:29 UTC | 13 Mar 24 23:29 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:29 UTC | 13 Mar 24 23:30 UTC |
	|         | addons-524943                                                                               |                      |         |         |                     |                     |
	| ip      | addons-524943 ip                                                                            | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:30 UTC | 13 Mar 24 23:30 UTC |
	| addons  | addons-524943 addons disable                                                                | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:30 UTC | 13 Mar 24 23:30 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-524943 ssh curl -s                                                                   | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:30 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:30 UTC | 13 Mar 24 23:30 UTC |
	|         | -p addons-524943                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-524943 ssh cat                                                                       | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:30 UTC | 13 Mar 24 23:30 UTC |
	|         | /opt/local-path-provisioner/pvc-73906157-3eeb-4425-bdc1-b9ef4702f661_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-524943 addons disable                                                                | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:30 UTC | 13 Mar 24 23:30 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:30 UTC | 13 Mar 24 23:30 UTC |
	|         | addons-524943                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:30 UTC | 13 Mar 24 23:30 UTC |
	|         | -p addons-524943                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-524943 addons                                                                        | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:31 UTC | 13 Mar 24 23:31 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-524943 addons                                                                        | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:31 UTC | 13 Mar 24 23:31 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-524943 ip                                                                            | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:32 UTC | 13 Mar 24 23:32 UTC |
	| addons  | addons-524943 addons disable                                                                | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:32 UTC | 13 Mar 24 23:32 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-524943 addons disable                                                                | addons-524943        | jenkins | v1.32.0 | 13 Mar 24 23:32 UTC | 13 Mar 24 23:32 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/13 23:27:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0313 23:27:10.763960   13081 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:27:10.764071   13081 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:27:10.764075   13081 out.go:304] Setting ErrFile to fd 2...
	I0313 23:27:10.764079   13081 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:27:10.764287   13081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:27:10.764882   13081 out.go:298] Setting JSON to false
	I0313 23:27:10.765653   13081 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":574,"bootTime":1710371857,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:27:10.765710   13081 start.go:139] virtualization: kvm guest
	I0313 23:27:10.768809   13081 out.go:177] * [addons-524943] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:27:10.770392   13081 out.go:177]   - MINIKUBE_LOCATION=18375
	I0313 23:27:10.770383   13081 notify.go:220] Checking for updates...
	I0313 23:27:10.772171   13081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:27:10.773660   13081 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:27:10.775134   13081 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:27:10.776585   13081 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0313 23:27:10.778107   13081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0313 23:27:10.779560   13081 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:27:10.810507   13081 out.go:177] * Using the kvm2 driver based on user configuration
	I0313 23:27:10.811952   13081 start.go:297] selected driver: kvm2
	I0313 23:27:10.811978   13081 start.go:901] validating driver "kvm2" against <nil>
	I0313 23:27:10.811996   13081 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0313 23:27:10.812712   13081 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:27:10.812816   13081 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0313 23:27:10.827268   13081 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0313 23:27:10.827319   13081 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0313 23:27:10.827585   13081 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0313 23:27:10.827619   13081 cni.go:84] Creating CNI manager for ""
	I0313 23:27:10.827630   13081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0313 23:27:10.827642   13081 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0313 23:27:10.827704   13081 start.go:340] cluster config:
	{Name:addons-524943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-524943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:27:10.827809   13081 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:27:10.829745   13081 out.go:177] * Starting "addons-524943" primary control-plane node in "addons-524943" cluster
	I0313 23:27:10.831236   13081 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:27:10.831273   13081 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0313 23:27:10.831283   13081 cache.go:56] Caching tarball of preloaded images
	I0313 23:27:10.831371   13081 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0313 23:27:10.831383   13081 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0313 23:27:10.831683   13081 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/config.json ...
	I0313 23:27:10.831708   13081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/config.json: {Name:mk0e697db46178c150f0a8040666e3973cc36841 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:27:10.831846   13081 start.go:360] acquireMachinesLock for addons-524943: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0313 23:27:10.831906   13081 start.go:364] duration metric: took 44.485µs to acquireMachinesLock for "addons-524943"
	I0313 23:27:10.831937   13081 start.go:93] Provisioning new machine with config: &{Name:addons-524943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:addons-524943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:27:10.832011   13081 start.go:125] createHost starting for "" (driver="kvm2")
	I0313 23:27:10.833931   13081 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0313 23:27:10.834078   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:27:10.834112   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:27:10.847875   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
	I0313 23:27:10.848347   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:27:10.848907   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:27:10.848923   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:27:10.849260   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:27:10.849462   13081 main.go:141] libmachine: (addons-524943) Calling .GetMachineName
	I0313 23:27:10.849633   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:27:10.849800   13081 start.go:159] libmachine.API.Create for "addons-524943" (driver="kvm2")
	I0313 23:27:10.849828   13081 client.go:168] LocalClient.Create starting
	I0313 23:27:10.849860   13081 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem
	I0313 23:27:11.002157   13081 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem
	I0313 23:27:11.338865   13081 main.go:141] libmachine: Running pre-create checks...
	I0313 23:27:11.338883   13081 main.go:141] libmachine: (addons-524943) Calling .PreCreateCheck
	I0313 23:27:11.339425   13081 main.go:141] libmachine: (addons-524943) Calling .GetConfigRaw
	I0313 23:27:11.340687   13081 main.go:141] libmachine: Creating machine...
	I0313 23:27:11.340701   13081 main.go:141] libmachine: (addons-524943) Calling .Create
	I0313 23:27:11.340901   13081 main.go:141] libmachine: (addons-524943) Creating KVM machine...
	I0313 23:27:11.342217   13081 main.go:141] libmachine: (addons-524943) DBG | found existing default KVM network
	I0313 23:27:11.343106   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:11.342944   13103 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0313 23:27:11.343177   13081 main.go:141] libmachine: (addons-524943) DBG | created network xml: 
	I0313 23:27:11.343198   13081 main.go:141] libmachine: (addons-524943) DBG | <network>
	I0313 23:27:11.343205   13081 main.go:141] libmachine: (addons-524943) DBG |   <name>mk-addons-524943</name>
	I0313 23:27:11.343211   13081 main.go:141] libmachine: (addons-524943) DBG |   <dns enable='no'/>
	I0313 23:27:11.343220   13081 main.go:141] libmachine: (addons-524943) DBG |   
	I0313 23:27:11.343226   13081 main.go:141] libmachine: (addons-524943) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0313 23:27:11.343246   13081 main.go:141] libmachine: (addons-524943) DBG |     <dhcp>
	I0313 23:27:11.343259   13081 main.go:141] libmachine: (addons-524943) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0313 23:27:11.343269   13081 main.go:141] libmachine: (addons-524943) DBG |     </dhcp>
	I0313 23:27:11.343284   13081 main.go:141] libmachine: (addons-524943) DBG |   </ip>
	I0313 23:27:11.343328   13081 main.go:141] libmachine: (addons-524943) DBG |   
	I0313 23:27:11.343356   13081 main.go:141] libmachine: (addons-524943) DBG | </network>
	I0313 23:27:11.343368   13081 main.go:141] libmachine: (addons-524943) DBG | 
	I0313 23:27:11.348493   13081 main.go:141] libmachine: (addons-524943) DBG | trying to create private KVM network mk-addons-524943 192.168.39.0/24...
	I0313 23:27:11.417404   13081 main.go:141] libmachine: (addons-524943) DBG | private KVM network mk-addons-524943 192.168.39.0/24 created
	I0313 23:27:11.417441   13081 main.go:141] libmachine: (addons-524943) Setting up store path in /home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943 ...
	I0313 23:27:11.417464   13081 main.go:141] libmachine: (addons-524943) Building disk image from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0313 23:27:11.417481   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:11.417382   13103 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:27:11.417505   13081 main.go:141] libmachine: (addons-524943) Downloading /home/jenkins/minikube-integration/18375-4912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0313 23:27:11.654810   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:11.654605   13103 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa...
	I0313 23:27:11.821454   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:11.821303   13103 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/addons-524943.rawdisk...
	I0313 23:27:11.821477   13081 main.go:141] libmachine: (addons-524943) DBG | Writing magic tar header
	I0313 23:27:11.821486   13081 main.go:141] libmachine: (addons-524943) DBG | Writing SSH key tar header
	I0313 23:27:11.821494   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:11.821417   13103 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943 ...
	I0313 23:27:11.821507   13081 main.go:141] libmachine: (addons-524943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943
	I0313 23:27:11.821581   13081 main.go:141] libmachine: (addons-524943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines
	I0313 23:27:11.821591   13081 main.go:141] libmachine: (addons-524943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:27:11.821600   13081 main.go:141] libmachine: (addons-524943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912
	I0313 23:27:11.821608   13081 main.go:141] libmachine: (addons-524943) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943 (perms=drwx------)
	I0313 23:27:11.821617   13081 main.go:141] libmachine: (addons-524943) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines (perms=drwxr-xr-x)
	I0313 23:27:11.821623   13081 main.go:141] libmachine: (addons-524943) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube (perms=drwxr-xr-x)
	I0313 23:27:11.821629   13081 main.go:141] libmachine: (addons-524943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0313 23:27:11.821637   13081 main.go:141] libmachine: (addons-524943) DBG | Checking permissions on dir: /home/jenkins
	I0313 23:27:11.821642   13081 main.go:141] libmachine: (addons-524943) DBG | Checking permissions on dir: /home
	I0313 23:27:11.821652   13081 main.go:141] libmachine: (addons-524943) DBG | Skipping /home - not owner
	I0313 23:27:11.821661   13081 main.go:141] libmachine: (addons-524943) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912 (perms=drwxrwxr-x)
	I0313 23:27:11.821667   13081 main.go:141] libmachine: (addons-524943) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0313 23:27:11.821674   13081 main.go:141] libmachine: (addons-524943) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0313 23:27:11.821694   13081 main.go:141] libmachine: (addons-524943) Creating domain...
	I0313 23:27:11.822891   13081 main.go:141] libmachine: (addons-524943) define libvirt domain using xml: 
	I0313 23:27:11.822923   13081 main.go:141] libmachine: (addons-524943) <domain type='kvm'>
	I0313 23:27:11.822930   13081 main.go:141] libmachine: (addons-524943)   <name>addons-524943</name>
	I0313 23:27:11.822935   13081 main.go:141] libmachine: (addons-524943)   <memory unit='MiB'>4000</memory>
	I0313 23:27:11.822941   13081 main.go:141] libmachine: (addons-524943)   <vcpu>2</vcpu>
	I0313 23:27:11.822949   13081 main.go:141] libmachine: (addons-524943)   <features>
	I0313 23:27:11.822957   13081 main.go:141] libmachine: (addons-524943)     <acpi/>
	I0313 23:27:11.822968   13081 main.go:141] libmachine: (addons-524943)     <apic/>
	I0313 23:27:11.822977   13081 main.go:141] libmachine: (addons-524943)     <pae/>
	I0313 23:27:11.822987   13081 main.go:141] libmachine: (addons-524943)     
	I0313 23:27:11.822998   13081 main.go:141] libmachine: (addons-524943)   </features>
	I0313 23:27:11.823008   13081 main.go:141] libmachine: (addons-524943)   <cpu mode='host-passthrough'>
	I0313 23:27:11.823016   13081 main.go:141] libmachine: (addons-524943)   
	I0313 23:27:11.823027   13081 main.go:141] libmachine: (addons-524943)   </cpu>
	I0313 23:27:11.823035   13081 main.go:141] libmachine: (addons-524943)   <os>
	I0313 23:27:11.823047   13081 main.go:141] libmachine: (addons-524943)     <type>hvm</type>
	I0313 23:27:11.823081   13081 main.go:141] libmachine: (addons-524943)     <boot dev='cdrom'/>
	I0313 23:27:11.823106   13081 main.go:141] libmachine: (addons-524943)     <boot dev='hd'/>
	I0313 23:27:11.823127   13081 main.go:141] libmachine: (addons-524943)     <bootmenu enable='no'/>
	I0313 23:27:11.823145   13081 main.go:141] libmachine: (addons-524943)   </os>
	I0313 23:27:11.823157   13081 main.go:141] libmachine: (addons-524943)   <devices>
	I0313 23:27:11.823168   13081 main.go:141] libmachine: (addons-524943)     <disk type='file' device='cdrom'>
	I0313 23:27:11.823192   13081 main.go:141] libmachine: (addons-524943)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/boot2docker.iso'/>
	I0313 23:27:11.823201   13081 main.go:141] libmachine: (addons-524943)       <target dev='hdc' bus='scsi'/>
	I0313 23:27:11.823206   13081 main.go:141] libmachine: (addons-524943)       <readonly/>
	I0313 23:27:11.823213   13081 main.go:141] libmachine: (addons-524943)     </disk>
	I0313 23:27:11.823219   13081 main.go:141] libmachine: (addons-524943)     <disk type='file' device='disk'>
	I0313 23:27:11.823231   13081 main.go:141] libmachine: (addons-524943)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0313 23:27:11.823244   13081 main.go:141] libmachine: (addons-524943)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/addons-524943.rawdisk'/>
	I0313 23:27:11.823251   13081 main.go:141] libmachine: (addons-524943)       <target dev='hda' bus='virtio'/>
	I0313 23:27:11.823263   13081 main.go:141] libmachine: (addons-524943)     </disk>
	I0313 23:27:11.823271   13081 main.go:141] libmachine: (addons-524943)     <interface type='network'>
	I0313 23:27:11.823281   13081 main.go:141] libmachine: (addons-524943)       <source network='mk-addons-524943'/>
	I0313 23:27:11.823288   13081 main.go:141] libmachine: (addons-524943)       <model type='virtio'/>
	I0313 23:27:11.823293   13081 main.go:141] libmachine: (addons-524943)     </interface>
	I0313 23:27:11.823303   13081 main.go:141] libmachine: (addons-524943)     <interface type='network'>
	I0313 23:27:11.823321   13081 main.go:141] libmachine: (addons-524943)       <source network='default'/>
	I0313 23:27:11.823334   13081 main.go:141] libmachine: (addons-524943)       <model type='virtio'/>
	I0313 23:27:11.823347   13081 main.go:141] libmachine: (addons-524943)     </interface>
	I0313 23:27:11.823357   13081 main.go:141] libmachine: (addons-524943)     <serial type='pty'>
	I0313 23:27:11.823368   13081 main.go:141] libmachine: (addons-524943)       <target port='0'/>
	I0313 23:27:11.823378   13081 main.go:141] libmachine: (addons-524943)     </serial>
	I0313 23:27:11.823391   13081 main.go:141] libmachine: (addons-524943)     <console type='pty'>
	I0313 23:27:11.823411   13081 main.go:141] libmachine: (addons-524943)       <target type='serial' port='0'/>
	I0313 23:27:11.823423   13081 main.go:141] libmachine: (addons-524943)     </console>
	I0313 23:27:11.823434   13081 main.go:141] libmachine: (addons-524943)     <rng model='virtio'>
	I0313 23:27:11.823448   13081 main.go:141] libmachine: (addons-524943)       <backend model='random'>/dev/random</backend>
	I0313 23:27:11.823458   13081 main.go:141] libmachine: (addons-524943)     </rng>
	I0313 23:27:11.823466   13081 main.go:141] libmachine: (addons-524943)     
	I0313 23:27:11.823479   13081 main.go:141] libmachine: (addons-524943)     
	I0313 23:27:11.823492   13081 main.go:141] libmachine: (addons-524943)   </devices>
	I0313 23:27:11.823502   13081 main.go:141] libmachine: (addons-524943) </domain>
	I0313 23:27:11.823514   13081 main.go:141] libmachine: (addons-524943) 
	I0313 23:27:11.830124   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:ae:f8:10 in network default
	I0313 23:27:11.830699   13081 main.go:141] libmachine: (addons-524943) Ensuring networks are active...
	I0313 23:27:11.830728   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:11.831507   13081 main.go:141] libmachine: (addons-524943) Ensuring network default is active
	I0313 23:27:11.831879   13081 main.go:141] libmachine: (addons-524943) Ensuring network mk-addons-524943 is active
	I0313 23:27:11.832341   13081 main.go:141] libmachine: (addons-524943) Getting domain xml...
	I0313 23:27:11.832983   13081 main.go:141] libmachine: (addons-524943) Creating domain...
	I0313 23:27:13.224281   13081 main.go:141] libmachine: (addons-524943) Waiting to get IP...
	I0313 23:27:13.225237   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:13.225712   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find current IP address of domain addons-524943 in network mk-addons-524943
	I0313 23:27:13.225742   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:13.225696   13103 retry.go:31] will retry after 267.214793ms: waiting for machine to come up
	I0313 23:27:13.494205   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:13.494625   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find current IP address of domain addons-524943 in network mk-addons-524943
	I0313 23:27:13.494642   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:13.494589   13103 retry.go:31] will retry after 326.553248ms: waiting for machine to come up
	I0313 23:27:13.823114   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:13.823580   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find current IP address of domain addons-524943 in network mk-addons-524943
	I0313 23:27:13.823611   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:13.823535   13103 retry.go:31] will retry after 466.656496ms: waiting for machine to come up
	I0313 23:27:14.292285   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:14.292788   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find current IP address of domain addons-524943 in network mk-addons-524943
	I0313 23:27:14.292818   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:14.292732   13103 retry.go:31] will retry after 394.468539ms: waiting for machine to come up
	I0313 23:27:14.689119   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:14.689627   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find current IP address of domain addons-524943 in network mk-addons-524943
	I0313 23:27:14.689664   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:14.689584   13103 retry.go:31] will retry after 731.734929ms: waiting for machine to come up
	I0313 23:27:15.422560   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:15.423085   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find current IP address of domain addons-524943 in network mk-addons-524943
	I0313 23:27:15.423105   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:15.422977   13103 retry.go:31] will retry after 889.992358ms: waiting for machine to come up
	I0313 23:27:16.315216   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:16.315664   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find current IP address of domain addons-524943 in network mk-addons-524943
	I0313 23:27:16.315694   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:16.315611   13103 retry.go:31] will retry after 913.032307ms: waiting for machine to come up
	I0313 23:27:17.229896   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:17.230336   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find current IP address of domain addons-524943 in network mk-addons-524943
	I0313 23:27:17.230364   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:17.230277   13103 retry.go:31] will retry after 1.047424138s: waiting for machine to come up
	I0313 23:27:18.279580   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:18.279946   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find current IP address of domain addons-524943 in network mk-addons-524943
	I0313 23:27:18.279973   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:18.279906   13103 retry.go:31] will retry after 1.771435311s: waiting for machine to come up
	I0313 23:27:20.053944   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:20.054411   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find current IP address of domain addons-524943 in network mk-addons-524943
	I0313 23:27:20.054469   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:20.054380   13103 retry.go:31] will retry after 1.920180683s: waiting for machine to come up
	I0313 23:27:21.976385   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:21.976773   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find current IP address of domain addons-524943 in network mk-addons-524943
	I0313 23:27:21.976813   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:21.976738   13103 retry.go:31] will retry after 1.77183805s: waiting for machine to come up
	I0313 23:27:23.750586   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:23.750963   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find current IP address of domain addons-524943 in network mk-addons-524943
	I0313 23:27:23.750991   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:23.750916   13103 retry.go:31] will retry after 3.317927819s: waiting for machine to come up
	I0313 23:27:27.070794   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:27.071215   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find current IP address of domain addons-524943 in network mk-addons-524943
	I0313 23:27:27.071237   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:27.071169   13103 retry.go:31] will retry after 4.165671942s: waiting for machine to come up
	I0313 23:27:31.238486   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:31.239076   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find current IP address of domain addons-524943 in network mk-addons-524943
	I0313 23:27:31.239105   13081 main.go:141] libmachine: (addons-524943) DBG | I0313 23:27:31.239007   13103 retry.go:31] will retry after 5.335312184s: waiting for machine to come up
	I0313 23:27:36.576021   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:36.576565   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has current primary IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:36.576585   13081 main.go:141] libmachine: (addons-524943) Found IP for machine: 192.168.39.37
	I0313 23:27:36.576598   13081 main.go:141] libmachine: (addons-524943) Reserving static IP address...
	I0313 23:27:36.576986   13081 main.go:141] libmachine: (addons-524943) DBG | unable to find host DHCP lease matching {name: "addons-524943", mac: "52:54:00:de:7c:3b", ip: "192.168.39.37"} in network mk-addons-524943
	I0313 23:27:36.648150   13081 main.go:141] libmachine: (addons-524943) DBG | Getting to WaitForSSH function...
	I0313 23:27:36.648184   13081 main.go:141] libmachine: (addons-524943) Reserved static IP address: 192.168.39.37
	I0313 23:27:36.648232   13081 main.go:141] libmachine: (addons-524943) Waiting for SSH to be available...
	I0313 23:27:36.650258   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:36.650738   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:minikube Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:36.650783   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:36.650901   13081 main.go:141] libmachine: (addons-524943) DBG | Using SSH client type: external
	I0313 23:27:36.650924   13081 main.go:141] libmachine: (addons-524943) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa (-rw-------)
	I0313 23:27:36.650957   13081 main.go:141] libmachine: (addons-524943) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0313 23:27:36.650975   13081 main.go:141] libmachine: (addons-524943) DBG | About to run SSH command:
	I0313 23:27:36.651024   13081 main.go:141] libmachine: (addons-524943) DBG | exit 0
	I0313 23:27:36.786721   13081 main.go:141] libmachine: (addons-524943) DBG | SSH cmd err, output: <nil>: 
	I0313 23:27:36.787083   13081 main.go:141] libmachine: (addons-524943) KVM machine creation complete!
	I0313 23:27:36.787377   13081 main.go:141] libmachine: (addons-524943) Calling .GetConfigRaw
	I0313 23:27:36.787861   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:27:36.788070   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:27:36.788214   13081 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0313 23:27:36.788227   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:27:36.789603   13081 main.go:141] libmachine: Detecting operating system of created instance...
	I0313 23:27:36.789617   13081 main.go:141] libmachine: Waiting for SSH to be available...
	I0313 23:27:36.789623   13081 main.go:141] libmachine: Getting to WaitForSSH function...
	I0313 23:27:36.789629   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:27:36.791569   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:36.791886   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:36.791909   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:36.792061   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:27:36.792243   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:36.792389   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:36.792572   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:27:36.792726   13081 main.go:141] libmachine: Using SSH client type: native
	I0313 23:27:36.792897   13081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I0313 23:27:36.792907   13081 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0313 23:27:36.906354   13081 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:27:36.906380   13081 main.go:141] libmachine: Detecting the provisioner...
	I0313 23:27:36.906388   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:27:36.909007   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:36.909348   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:36.909389   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:36.909509   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:27:36.909691   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:36.909871   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:36.910070   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:27:36.910234   13081 main.go:141] libmachine: Using SSH client type: native
	I0313 23:27:36.910417   13081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I0313 23:27:36.910431   13081 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0313 23:27:37.023752   13081 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0313 23:27:37.023818   13081 main.go:141] libmachine: found compatible host: buildroot
	I0313 23:27:37.023825   13081 main.go:141] libmachine: Provisioning with buildroot...
	I0313 23:27:37.023832   13081 main.go:141] libmachine: (addons-524943) Calling .GetMachineName
	I0313 23:27:37.024101   13081 buildroot.go:166] provisioning hostname "addons-524943"
	I0313 23:27:37.024131   13081 main.go:141] libmachine: (addons-524943) Calling .GetMachineName
	I0313 23:27:37.024366   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:27:37.026684   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.027050   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:37.027071   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.027278   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:27:37.027435   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:37.027534   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:37.027618   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:27:37.027763   13081 main.go:141] libmachine: Using SSH client type: native
	I0313 23:27:37.027927   13081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I0313 23:27:37.027942   13081 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-524943 && echo "addons-524943" | sudo tee /etc/hostname
	I0313 23:27:37.159912   13081 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-524943
	
	I0313 23:27:37.159939   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:27:37.162326   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.162665   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:37.162709   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.162879   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:27:37.163067   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:37.163229   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:37.163349   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:27:37.163560   13081 main.go:141] libmachine: Using SSH client type: native
	I0313 23:27:37.163725   13081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I0313 23:27:37.163741   13081 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-524943' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-524943/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-524943' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0313 23:27:37.284483   13081 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:27:37.284511   13081 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0313 23:27:37.284547   13081 buildroot.go:174] setting up certificates
	I0313 23:27:37.284558   13081 provision.go:84] configureAuth start
	I0313 23:27:37.284570   13081 main.go:141] libmachine: (addons-524943) Calling .GetMachineName
	I0313 23:27:37.284863   13081 main.go:141] libmachine: (addons-524943) Calling .GetIP
	I0313 23:27:37.287363   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.287676   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:37.287703   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.287801   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:27:37.289683   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.289940   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:37.289968   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.290076   13081 provision.go:143] copyHostCerts
	I0313 23:27:37.290151   13081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0313 23:27:37.290267   13081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0313 23:27:37.290357   13081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0313 23:27:37.290416   13081 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.addons-524943 san=[127.0.0.1 192.168.39.37 addons-524943 localhost minikube]
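Note: the server certificate above is produced by minikube's own Go code; as a rough hand-run equivalent (a sketch only, reusing the CA files under .minikube/certs and the SANs listed in the log line above):

	# sketch: issue a server cert signed by the existing minikube CA, with the same SANs as logged
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.addons-524943" -out server.csr
	openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.37,DNS:addons-524943,DNS:localhost,DNS:minikube") \
	  -days 365 -out server.pem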
	I0313 23:27:37.538915   13081 provision.go:177] copyRemoteCerts
	I0313 23:27:37.538968   13081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0313 23:27:37.538988   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:27:37.541694   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.541997   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:37.542022   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.542195   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:27:37.542366   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:37.542542   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:27:37.542746   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:27:37.629609   13081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0313 23:27:37.655854   13081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0313 23:27:37.680444   13081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0313 23:27:37.705171   13081 provision.go:87] duration metric: took 420.598897ms to configureAuth
	I0313 23:27:37.705196   13081 buildroot.go:189] setting minikube options for container-runtime
	I0313 23:27:37.705357   13081 config.go:182] Loaded profile config "addons-524943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:27:37.705422   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:27:37.707900   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.708215   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:37.708244   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.708389   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:27:37.708583   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:37.708750   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:37.708885   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:27:37.709074   13081 main.go:141] libmachine: Using SSH client type: native
	I0313 23:27:37.709281   13081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I0313 23:27:37.709297   13081 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0313 23:27:37.982496   13081 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
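Note: the %!s(MISSING) in the command logged above is a logging artifact, not part of what ran on the guest; the command template contains a literal %s, which the Go logger then tried to format without an argument. The net effect of the step, taken from the output echoed back here, is a one-line environment file plus a crio restart:

	$ cat /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	$ sudo systemctl restart crio    # re-reads the option so the insecure-registry flag takes effect

(The file is presumably consumed through the crio service's environment configuration on the Buildroot guest image.)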
	
	I0313 23:27:37.982523   13081 main.go:141] libmachine: Checking connection to Docker...
	I0313 23:27:37.982531   13081 main.go:141] libmachine: (addons-524943) Calling .GetURL
	I0313 23:27:37.983806   13081 main.go:141] libmachine: (addons-524943) DBG | Using libvirt version 6000000
	I0313 23:27:37.986174   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.986520   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:37.986595   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.986683   13081 main.go:141] libmachine: Docker is up and running!
	I0313 23:27:37.986701   13081 main.go:141] libmachine: Reticulating splines...
	I0313 23:27:37.986711   13081 client.go:171] duration metric: took 27.136874931s to LocalClient.Create
	I0313 23:27:37.986740   13081 start.go:167] duration metric: took 27.136939942s to libmachine.API.Create "addons-524943"
	I0313 23:27:37.986774   13081 start.go:293] postStartSetup for "addons-524943" (driver="kvm2")
	I0313 23:27:37.986791   13081 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0313 23:27:37.986814   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:27:37.987036   13081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0313 23:27:37.987059   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:27:37.989279   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.989580   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:37.989612   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:37.989787   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:27:37.989948   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:37.990120   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:27:37.990269   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:27:38.077510   13081 ssh_runner.go:195] Run: cat /etc/os-release
	I0313 23:27:38.082487   13081 info.go:137] Remote host: Buildroot 2023.02.9
	I0313 23:27:38.082510   13081 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0313 23:27:38.082605   13081 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0313 23:27:38.082630   13081 start.go:296] duration metric: took 95.847795ms for postStartSetup
	I0313 23:27:38.082661   13081 main.go:141] libmachine: (addons-524943) Calling .GetConfigRaw
	I0313 23:27:38.083225   13081 main.go:141] libmachine: (addons-524943) Calling .GetIP
	I0313 23:27:38.086283   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:38.086678   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:38.086708   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:38.086927   13081 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/config.json ...
	I0313 23:27:38.087127   13081 start.go:128] duration metric: took 27.255103757s to createHost
	I0313 23:27:38.087153   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:27:38.089220   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:38.089534   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:38.089573   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:38.089728   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:27:38.089906   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:38.090073   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:38.090215   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:27:38.090350   13081 main.go:141] libmachine: Using SSH client type: native
	I0313 23:27:38.090560   13081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I0313 23:27:38.090572   13081 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0313 23:27:38.207848   13081 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710372458.182625144
	
	I0313 23:27:38.207871   13081 fix.go:216] guest clock: 1710372458.182625144
	I0313 23:27:38.207881   13081 fix.go:229] Guest: 2024-03-13 23:27:38.182625144 +0000 UTC Remote: 2024-03-13 23:27:38.087139616 +0000 UTC m=+27.368837480 (delta=95.485528ms)
	I0313 23:27:38.207935   13081 fix.go:200] guest clock delta is within tolerance: 95.485528ms
	I0313 23:27:38.207943   13081 start.go:83] releasing machines lock for "addons-524943", held for 27.376018066s
	I0313 23:27:38.207971   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:27:38.208243   13081 main.go:141] libmachine: (addons-524943) Calling .GetIP
	I0313 23:27:38.210945   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:38.211339   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:38.211369   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:38.211466   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:27:38.211953   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:27:38.212102   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:27:38.212167   13081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0313 23:27:38.212242   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:27:38.212365   13081 ssh_runner.go:195] Run: cat /version.json
	I0313 23:27:38.212392   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:27:38.214648   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:38.214946   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:38.214971   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:38.214992   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:38.215150   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:27:38.215303   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:38.215427   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:38.215446   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:38.215477   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:27:38.215589   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:27:38.215648   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:27:38.215712   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:27:38.215840   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:27:38.216037   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:27:38.296328   13081 ssh_runner.go:195] Run: systemctl --version
	I0313 23:27:38.337368   13081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0313 23:27:38.499413   13081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0313 23:27:38.505598   13081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0313 23:27:38.505664   13081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0313 23:27:38.522775   13081 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0313 23:27:38.522801   13081 start.go:494] detecting cgroup driver to use...
	I0313 23:27:38.522859   13081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0313 23:27:38.540116   13081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0313 23:27:38.557042   13081 docker.go:217] disabling cri-docker service (if available) ...
	I0313 23:27:38.557112   13081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0313 23:27:38.572123   13081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0313 23:27:38.587268   13081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0313 23:27:38.699360   13081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0313 23:27:38.857959   13081 docker.go:233] disabling docker service ...
	I0313 23:27:38.858022   13081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0313 23:27:38.873049   13081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0313 23:27:38.886062   13081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0313 23:27:39.012835   13081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0313 23:27:39.130161   13081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0313 23:27:39.146537   13081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0313 23:27:39.165350   13081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0313 23:27:39.165399   13081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:27:39.175348   13081 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0313 23:27:39.175400   13081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:27:39.185636   13081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:27:39.195850   13081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:27:39.206401   13081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0313 23:27:39.216864   13081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0313 23:27:39.225805   13081 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0313 23:27:39.225851   13081 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0313 23:27:39.239156   13081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0313 23:27:39.248362   13081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:27:39.366619   13081 ssh_runner.go:195] Run: sudo systemctl restart crio
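Note: consolidated, the CRI-O preparation in the lines above comes down to the following host-side sequence (paths and values copied from the log; shown only as a recap sketch, not as the canonical minikube code path):

	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo rm -rf /etc/cni/net.mk
	sudo modprobe br_netfilter                        # the sysctl probe above failed, so the module is loaded explicitly
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio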
	I0313 23:27:39.503010   13081 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0313 23:27:39.503086   13081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0313 23:27:39.508799   13081 start.go:562] Will wait 60s for crictl version
	I0313 23:27:39.508876   13081 ssh_runner.go:195] Run: which crictl
	I0313 23:27:39.512569   13081 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0313 23:27:39.554612   13081 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0313 23:27:39.554723   13081 ssh_runner.go:195] Run: crio --version
	I0313 23:27:39.584105   13081 ssh_runner.go:195] Run: crio --version
	I0313 23:27:39.623289   13081 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0313 23:27:39.624603   13081 main.go:141] libmachine: (addons-524943) Calling .GetIP
	I0313 23:27:39.627453   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:39.627760   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:27:39.627794   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:27:39.627980   13081 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0313 23:27:39.632331   13081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:27:39.645721   13081 kubeadm.go:877] updating cluster {Name:addons-524943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-524943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.37 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0313 23:27:39.645820   13081 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:27:39.645858   13081 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:27:39.677029   13081 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0313 23:27:39.677101   13081 ssh_runner.go:195] Run: which lz4
	I0313 23:27:39.681025   13081 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0313 23:27:39.685170   13081 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0313 23:27:39.685194   13081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0313 23:27:41.284945   13081 crio.go:444] duration metric: took 1.603929539s to copy over tarball
	I0313 23:27:41.285043   13081 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0313 23:27:44.092598   13081 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.80752379s)
	I0313 23:27:44.092629   13081 crio.go:451] duration metric: took 2.807648s to extract the tarball
	I0313 23:27:44.092639   13081 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0313 23:27:44.136727   13081 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:27:44.185802   13081 crio.go:496] all images are preloaded for cri-o runtime.
	I0313 23:27:44.185825   13081 cache_images.go:84] Images are preloaded, skipping loading
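Note: the preload path above avoids pulling images one by one: a single lz4 tarball of the container image store is copied to the guest and unpacked into /var with security xattrs preserved, after which crictl reports the expected images as already present. The core of it, as run above:

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo crictl images --output json    # afterwards lists the registry.k8s.io/* images as preloaded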
	I0313 23:27:44.185833   13081 kubeadm.go:928] updating node { 192.168.39.37 8443 v1.28.4 crio true true} ...
	I0313 23:27:44.185927   13081 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-524943 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-524943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
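Note: the empty ExecStart= followed by a second ExecStart= in the generated kubelet unit above is the standard systemd idiom for replacing, rather than appending to, a command line when a unit is overridden through a drop-in. The files themselves are written a few lines below by scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service; the drop-in presumably carries the [Service] block, roughly:

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-524943 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.37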
	I0313 23:27:44.185991   13081 ssh_runner.go:195] Run: crio config
	I0313 23:27:44.243114   13081 cni.go:84] Creating CNI manager for ""
	I0313 23:27:44.243140   13081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0313 23:27:44.243152   13081 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0313 23:27:44.243171   13081 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.37 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-524943 NodeName:addons-524943 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0313 23:27:44.243315   13081 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-524943"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.37
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.37"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
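Note: the generated file above is one multi-document YAML: InitConfiguration (node registration, API bind address), ClusterConfiguration (cert SANs, component extraArgs, etcd), KubeletConfiguration (cgroupfs driver, CRI-O socket, relaxed eviction thresholds), and KubeProxyConfiguration. A hedged way to sanity-check such a file outside a test run, assuming a matching kubeadm v1.28.x binary on the guest, is a dry run against it:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run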
	
	I0313 23:27:44.243374   13081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0313 23:27:44.253791   13081 binaries.go:44] Found k8s binaries, skipping transfer
	I0313 23:27:44.253847   13081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0313 23:27:44.263777   13081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0313 23:27:44.281159   13081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0313 23:27:44.297983   13081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0313 23:27:44.315173   13081 ssh_runner.go:195] Run: grep 192.168.39.37	control-plane.minikube.internal$ /etc/hosts
	I0313 23:27:44.319350   13081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
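Note: the /etc/hosts updates above (host.minikube.internal earlier, control-plane.minikube.internal here) share one pattern: rebuild the file without the stale entry into a temp file, then copy it back with sudo. The temp-file indirection matters because a plain "sudo echo ... > /etc/hosts" would not work; the redirection is performed by the unprivileged SSH user, not by sudo. Stripped of the minikube specifics:

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; printf '192.168.39.37\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts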
	I0313 23:27:44.331743   13081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:27:44.466930   13081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:27:44.492866   13081 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943 for IP: 192.168.39.37
	I0313 23:27:44.492895   13081 certs.go:194] generating shared ca certs ...
	I0313 23:27:44.492915   13081 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:27:44.493059   13081 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0313 23:27:44.598447   13081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt ...
	I0313 23:27:44.598481   13081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt: {Name:mkf96f5c832dd95c0a81d8dcfb8378e9d2fcc66a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:27:44.598646   13081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key ...
	I0313 23:27:44.598664   13081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key: {Name:mkf8148796e0de1b07b35f06f94e4b482d6e7d9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:27:44.598732   13081 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0313 23:27:44.658397   13081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt ...
	I0313 23:27:44.658424   13081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt: {Name:mk1fb136abe99fe0ba52035e24b9481e4876b96f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:27:44.658568   13081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key ...
	I0313 23:27:44.658582   13081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key: {Name:mk5c04717a94fd7ee9ef605371c0c0f2dda87ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:27:44.658642   13081 certs.go:256] generating profile certs ...
	I0313 23:27:44.658689   13081 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.key
	I0313 23:27:44.658702   13081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt with IP's: []
	I0313 23:27:44.829709   13081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt ...
	I0313 23:27:44.829740   13081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: {Name:mkbd8919480adbcde27f3ec7884302451383108e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:27:44.829895   13081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.key ...
	I0313 23:27:44.829906   13081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.key: {Name:mka922e4b25139e12e6f1bcb5ccde598c7b05129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:27:44.829970   13081 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/apiserver.key.99cc5794
	I0313 23:27:44.829987   13081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/apiserver.crt.99cc5794 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.37]
	I0313 23:27:45.008348   13081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/apiserver.crt.99cc5794 ...
	I0313 23:27:45.008380   13081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/apiserver.crt.99cc5794: {Name:mk64b7c30452fc40af88bdac9d4171019e2145c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:27:45.008551   13081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/apiserver.key.99cc5794 ...
	I0313 23:27:45.008571   13081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/apiserver.key.99cc5794: {Name:mkbc950a77db2bb7e25bd5834eeed944a00ad9cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:27:45.008643   13081 certs.go:381] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/apiserver.crt.99cc5794 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/apiserver.crt
	I0313 23:27:45.008709   13081 certs.go:385] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/apiserver.key.99cc5794 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/apiserver.key
	I0313 23:27:45.008754   13081 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/proxy-client.key
	I0313 23:27:45.008767   13081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/proxy-client.crt with IP's: []
	I0313 23:27:45.107355   13081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/proxy-client.crt ...
	I0313 23:27:45.107383   13081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/proxy-client.crt: {Name:mk5a322cb93d930e0222948402e05a29249c4a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:27:45.107546   13081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/proxy-client.key ...
	I0313 23:27:45.107562   13081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/proxy-client.key: {Name:mk3b9fa55352612979d08a2746ae0794d3a2562c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:27:45.107764   13081 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0313 23:27:45.107800   13081 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0313 23:27:45.107823   13081 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0313 23:27:45.107846   13081 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0313 23:27:45.108365   13081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0313 23:27:45.137695   13081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0313 23:27:45.167532   13081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0313 23:27:45.213000   13081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0313 23:27:45.249190   13081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0313 23:27:45.274602   13081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0313 23:27:45.299347   13081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0313 23:27:45.324094   13081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0313 23:27:45.347706   13081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0313 23:27:45.371974   13081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0313 23:27:45.390019   13081 ssh_runner.go:195] Run: openssl version
	I0313 23:27:45.396486   13081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0313 23:27:45.409155   13081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:27:45.414017   13081 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:27:45.414071   13081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:27:45.419916   13081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
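Note: the b5213941.0 link created above follows OpenSSL's hashed-directory convention: consumers of /etc/ssl/certs look a CA up by its subject hash plus a numeric suffix, and the hash comes from the openssl x509 -hash call two lines earlier. Deriving the link name by hand (same files as in the log):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"    # HASH is b5213941 for this CA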
	I0313 23:27:45.431459   13081 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0313 23:27:45.435428   13081 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0313 23:27:45.435475   13081 kubeadm.go:391] StartCluster: {Name:addons-524943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-524943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.37 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:27:45.435546   13081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0313 23:27:45.435584   13081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0313 23:27:45.476337   13081 cri.go:89] found id: ""
	I0313 23:27:45.476415   13081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0313 23:27:45.487173   13081 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0313 23:27:45.497580   13081 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0313 23:27:45.507809   13081 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0313 23:27:45.507828   13081 kubeadm.go:156] found existing configuration files:
	
	I0313 23:27:45.507864   13081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0313 23:27:45.517879   13081 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0313 23:27:45.518001   13081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0313 23:27:45.528721   13081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0313 23:27:45.538671   13081 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0313 23:27:45.538730   13081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0313 23:27:45.549077   13081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0313 23:27:45.558821   13081 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0313 23:27:45.558885   13081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0313 23:27:45.569093   13081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0313 23:27:45.578969   13081 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0313 23:27:45.579033   13081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0313 23:27:45.589185   13081 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0313 23:27:45.774491   13081 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0313 23:27:55.479043   13081 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0313 23:27:55.479141   13081 kubeadm.go:309] [preflight] Running pre-flight checks
	I0313 23:27:55.479238   13081 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0313 23:27:55.479349   13081 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0313 23:27:55.479463   13081 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0313 23:27:55.479560   13081 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0313 23:27:55.482214   13081 out.go:204]   - Generating certificates and keys ...
	I0313 23:27:55.482316   13081 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0313 23:27:55.482412   13081 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0313 23:27:55.482513   13081 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0313 23:27:55.482642   13081 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0313 23:27:55.482724   13081 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0313 23:27:55.482814   13081 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0313 23:27:55.482896   13081 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0313 23:27:55.483037   13081 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-524943 localhost] and IPs [192.168.39.37 127.0.0.1 ::1]
	I0313 23:27:55.483120   13081 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0313 23:27:55.483297   13081 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-524943 localhost] and IPs [192.168.39.37 127.0.0.1 ::1]
	I0313 23:27:55.483390   13081 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0313 23:27:55.483487   13081 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0313 23:27:55.483530   13081 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0313 23:27:55.483580   13081 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0313 23:27:55.483638   13081 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0313 23:27:55.483686   13081 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0313 23:27:55.483739   13081 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0313 23:27:55.483784   13081 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0313 23:27:55.483894   13081 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0313 23:27:55.483969   13081 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0313 23:27:55.485336   13081 out.go:204]   - Booting up control plane ...
	I0313 23:27:55.485442   13081 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0313 23:27:55.485543   13081 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0313 23:27:55.485639   13081 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0313 23:27:55.485790   13081 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0313 23:27:55.485902   13081 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0313 23:27:55.485962   13081 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0313 23:27:55.486130   13081 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0313 23:27:55.486195   13081 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.502207 seconds
	I0313 23:27:55.486281   13081 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0313 23:27:55.486404   13081 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0313 23:27:55.486488   13081 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0313 23:27:55.486639   13081 kubeadm.go:309] [mark-control-plane] Marking the node addons-524943 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0313 23:27:55.486694   13081 kubeadm.go:309] [bootstrap-token] Using token: p2z3f1.3dhrzf23fqresnit
	I0313 23:27:55.488176   13081 out.go:204]   - Configuring RBAC rules ...
	I0313 23:27:55.488304   13081 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0313 23:27:55.488426   13081 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0313 23:27:55.488551   13081 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0313 23:27:55.488654   13081 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0313 23:27:55.488776   13081 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0313 23:27:55.488884   13081 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0313 23:27:55.488983   13081 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0313 23:27:55.489021   13081 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0313 23:27:55.489059   13081 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0313 23:27:55.489065   13081 kubeadm.go:309] 
	I0313 23:27:55.489120   13081 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0313 23:27:55.489127   13081 kubeadm.go:309] 
	I0313 23:27:55.489197   13081 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0313 23:27:55.489206   13081 kubeadm.go:309] 
	I0313 23:27:55.489235   13081 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0313 23:27:55.489285   13081 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0313 23:27:55.489330   13081 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0313 23:27:55.489336   13081 kubeadm.go:309] 
	I0313 23:27:55.489384   13081 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0313 23:27:55.489390   13081 kubeadm.go:309] 
	I0313 23:27:55.489437   13081 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0313 23:27:55.489450   13081 kubeadm.go:309] 
	I0313 23:27:55.489518   13081 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0313 23:27:55.489597   13081 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0313 23:27:55.489674   13081 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0313 23:27:55.489690   13081 kubeadm.go:309] 
	I0313 23:27:55.489802   13081 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0313 23:27:55.489902   13081 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0313 23:27:55.489912   13081 kubeadm.go:309] 
	I0313 23:27:55.490015   13081 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token p2z3f1.3dhrzf23fqresnit \
	I0313 23:27:55.490166   13081 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c \
	I0313 23:27:55.490207   13081 kubeadm.go:309] 	--control-plane 
	I0313 23:27:55.490217   13081 kubeadm.go:309] 
	I0313 23:27:55.490326   13081 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0313 23:27:55.490336   13081 kubeadm.go:309] 
	I0313 23:27:55.490405   13081 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token p2z3f1.3dhrzf23fqresnit \
	I0313 23:27:55.490507   13081 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c 
	I0313 23:27:55.490522   13081 cni.go:84] Creating CNI manager for ""
	I0313 23:27:55.490533   13081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0313 23:27:55.492896   13081 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0313 23:27:55.494079   13081 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0313 23:27:55.514283   13081 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0313 23:27:55.583395   13081 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0313 23:27:55.583463   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:27:55.583472   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-524943 minikube.k8s.io/updated_at=2024_03_13T23_27_55_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe minikube.k8s.io/name=addons-524943 minikube.k8s.io/primary=true
	I0313 23:27:55.612575   13081 ops.go:34] apiserver oom_adj: -16
	I0313 23:27:55.725191   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:27:56.225320   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:27:56.725912   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:27:57.225519   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:27:57.725632   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:27:58.225557   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:27:58.725518   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:27:59.226071   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:27:59.725244   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:00.225548   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:00.726007   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:01.225634   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:01.725247   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:02.226250   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:02.726154   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:03.225904   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:03.726301   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:04.225601   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:04.725257   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:05.225848   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:05.725342   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:06.225752   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:06.725286   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:07.225883   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:07.725677   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:08.225398   13081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:28:08.317849   13081 kubeadm.go:1106] duration metric: took 12.734445441s to wait for elevateKubeSystemPrivileges
	W0313 23:28:08.317919   13081 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0313 23:28:08.317930   13081 kubeadm.go:393] duration metric: took 22.882459283s to StartCluster
	I0313 23:28:08.317951   13081 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:28:08.318085   13081 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:28:08.318456   13081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:28:08.318683   13081 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0313 23:28:08.318720   13081 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.37 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:28:08.320658   13081 out.go:177] * Verifying Kubernetes components...
	I0313 23:28:08.318800   13081 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0313 23:28:08.318916   13081 config.go:182] Loaded profile config "addons-524943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:28:08.321823   13081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:28:08.321828   13081 addons.go:69] Setting helm-tiller=true in profile "addons-524943"
	I0313 23:28:08.321842   13081 addons.go:69] Setting ingress-dns=true in profile "addons-524943"
	I0313 23:28:08.321846   13081 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-524943"
	I0313 23:28:08.321847   13081 addons.go:69] Setting storage-provisioner=true in profile "addons-524943"
	I0313 23:28:08.321869   13081 addons.go:234] Setting addon ingress-dns=true in "addons-524943"
	I0313 23:28:08.321834   13081 addons.go:69] Setting yakd=true in profile "addons-524943"
	I0313 23:28:08.321880   13081 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-524943"
	I0313 23:28:08.321885   13081 addons.go:234] Setting addon storage-provisioner=true in "addons-524943"
	I0313 23:28:08.321874   13081 addons.go:69] Setting volumesnapshots=true in profile "addons-524943"
	I0313 23:28:08.321888   13081 addons.go:69] Setting registry=true in profile "addons-524943"
	I0313 23:28:08.321903   13081 addons.go:69] Setting cloud-spanner=true in profile "addons-524943"
	I0313 23:28:08.321910   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.321917   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.321920   13081 addons.go:234] Setting addon cloud-spanner=true in "addons-524943"
	I0313 23:28:08.321923   13081 addons.go:234] Setting addon volumesnapshots=true in "addons-524943"
	I0313 23:28:08.321912   13081 addons.go:69] Setting default-storageclass=true in profile "addons-524943"
	I0313 23:28:08.321944   13081 addons.go:234] Setting addon registry=true in "addons-524943"
	I0313 23:28:08.321945   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.321963   13081 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-524943"
	I0313 23:28:08.321983   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.321984   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.321869   13081 addons.go:234] Setting addon helm-tiller=true in "addons-524943"
	I0313 23:28:08.322029   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.322336   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.322342   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.322342   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.321836   13081 addons.go:69] Setting ingress=true in profile "addons-524943"
	I0313 23:28:08.322362   13081 addons.go:69] Setting inspektor-gadget=true in profile "addons-524943"
	I0313 23:28:08.322363   13081 addons.go:69] Setting gcp-auth=true in profile "addons-524943"
	I0313 23:28:08.322372   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.322374   13081 addons.go:234] Setting addon ingress=true in "addons-524943"
	I0313 23:28:08.322385   13081 mustload.go:65] Loading cluster: addons-524943
	I0313 23:28:08.322388   13081 addons.go:234] Setting addon inspektor-gadget=true in "addons-524943"
	I0313 23:28:08.321895   13081 addons.go:234] Setting addon yakd=true in "addons-524943"
	I0313 23:28:08.322400   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.322409   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.322403   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.322417   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.322541   13081 config.go:182] Loaded profile config "addons-524943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:28:08.322631   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.322656   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.322721   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.322730   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.322745   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.322756   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.322783   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.322824   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.322827   13081 addons.go:69] Setting metrics-server=true in profile "addons-524943"
	I0313 23:28:08.322849   13081 addons.go:234] Setting addon metrics-server=true in "addons-524943"
	I0313 23:28:08.322870   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.322872   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.322896   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.321828   13081 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-524943"
	I0313 23:28:08.323027   13081 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-524943"
	I0313 23:28:08.323056   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.323206   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.323223   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.322373   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.323271   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.322384   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.323548   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.323573   13081 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-524943"
	I0313 23:28:08.323699   13081 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-524943"
	I0313 23:28:08.323744   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.323818   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.323848   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.324117   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.324158   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.327227   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.343013   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37893
	I0313 23:28:08.343127   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33527
	I0313 23:28:08.343489   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.343535   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.344030   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33559
	I0313 23:28:08.344100   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.344115   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.344293   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.344310   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.344646   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.344689   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.344711   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.345376   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.345413   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.345523   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.345550   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.345923   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I0313 23:28:08.346019   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.346036   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.346489   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.346814   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.347088   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.347123   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.347216   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.347235   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.347721   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.347926   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.349951   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.350307   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43453
	I0313 23:28:08.350312   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.350350   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.350794   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.351423   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.351442   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.352197   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.352703   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.352737   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.359053   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.359097   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.359991   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.360029   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.371205   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42819
	I0313 23:28:08.372935   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36401
	I0313 23:28:08.373535   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.374203   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.374223   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.374529   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36145
	I0313 23:28:08.374693   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.374913   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.375556   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.375582   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.376067   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.376090   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.376224   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.376243   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.376693   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.376747   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.376851   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.377246   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.377280   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.377540   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.380457   13081 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0313 23:28:08.381844   13081 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0313 23:28:08.381862   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0313 23:28:08.381882   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:08.381672   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I0313 23:28:08.383011   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.383658   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.383676   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.384289   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.384570   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.385803   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.386241   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:08.386264   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.386565   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:08.386754   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:08.386979   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:08.387136   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:08.388363   13081 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-524943"
	I0313 23:28:08.388402   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.389351   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.389386   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.389659   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34015
	I0313 23:28:08.390582   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.392087   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.392105   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.392453   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.392523   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I0313 23:28:08.392696   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.392841   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.393050   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0313 23:28:08.393320   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.393333   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.393628   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.393770   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.394071   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.394612   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.394627   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.395003   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.395231   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.395353   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.397347   13081 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0313 23:28:08.398677   13081 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0313 23:28:08.398702   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0313 23:28:08.397349   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.398720   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:08.400078   13081 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0313 23:28:08.399783   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.401296   13081 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0313 23:28:08.402219   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.403200   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:08.403223   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.403232   13081 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0313 23:28:08.402804   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:08.404546   13081 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0313 23:28:08.404560   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0313 23:28:08.404577   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:08.403158   13081 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0313 23:28:08.404769   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:08.405932   13081 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0313 23:28:08.405953   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0313 23:28:08.405969   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:08.406154   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:08.406292   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:08.408545   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.409874   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:08.410535   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.411390   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.411726   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:08.411749   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.411781   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:08.414591   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0313 23:28:08.414631   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46851
	I0313 23:28:08.414755   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41399
	I0313 23:28:08.414775   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:08.414862   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:08.414910   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:08.415208   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:08.415215   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.415280   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:08.415516   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:08.415556   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37129
	I0313 23:28:08.415664   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.415830   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:08.416081   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.416082   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.416191   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.416225   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.416557   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.416629   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.416641   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.416666   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.416676   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.416761   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.416774   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.416995   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.417051   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.417268   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.417527   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.417548   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.417553   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.417582   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.418598   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I0313 23:28:08.418719   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0313 23:28:08.419324   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.419349   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36107
	I0313 23:28:08.419329   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.419365   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.419843   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.419878   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.419846   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.419924   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.420039   13081 addons.go:234] Setting addon default-storageclass=true in "addons-524943"
	I0313 23:28:08.420093   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:08.420253   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.420394   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.420414   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.420436   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.420473   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.420474   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.421039   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.421079   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.421274   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.421499   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.421517   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.421579   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39955
	I0313 23:28:08.421801   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.421839   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.422036   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.423400   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.423434   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.423598   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
	I0313 23:28:08.423942   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.424361   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.424384   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.424722   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.425225   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.425253   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.427079   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.427591   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.427606   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.428062   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.428622   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.428645   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.436965   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I0313 23:28:08.437546   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.438080   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.438103   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.438463   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.438630   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.440651   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.442858   13081 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0313 23:28:08.444220   13081 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0313 23:28:08.444243   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0313 23:28:08.444261   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:08.447794   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.448151   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:08.448179   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.448366   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:08.448515   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:08.448647   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:08.448752   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:08.458293   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46837
	I0313 23:28:08.458892   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38325
	I0313 23:28:08.458927   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I0313 23:28:08.459102   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.459336   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.459660   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.459678   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.459774   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.459791   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.460026   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.460164   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.460220   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.461161   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:08.461189   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:08.462023   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.462098   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0313 23:28:08.462842   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.464314   13081 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0313 23:28:08.465613   13081 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0313 23:28:08.465631   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0313 23:28:08.465648   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:08.463786   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.465710   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.464228   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34631
	I0313 23:28:08.464611   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0313 23:28:08.464654   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.465239   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I0313 23:28:08.465473   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34787
	I0313 23:28:08.466204   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.466286   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.466479   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.466494   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.466583   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.466638   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.466651   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.466851   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.467021   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.467093   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.467411   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.467501   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.468169   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.468192   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.468262   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.468724   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.468740   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.468799   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.468973   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.469038   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.469160   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.469343   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.469988   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.470005   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.470106   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.470243   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.472007   13081 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0313 23:28:08.470604   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.470700   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:08.471150   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:08.471480   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.471586   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.471862   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.471951   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.473337   13081 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0313 23:28:08.473348   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0313 23:28:08.473361   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:08.473376   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.474699   13081 out.go:177]   - Using image docker.io/registry:2.8.3
	I0313 23:28:08.473936   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.473999   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:08.475932   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.476353   13081 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0313 23:28:08.476437   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:08.477457   13081 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0313 23:28:08.477466   13081 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0313 23:28:08.477643   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:08.477656   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32851
	I0313 23:28:08.478240   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.478722   13081 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0313 23:28:08.479821   13081 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0313 23:28:08.479839   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0313 23:28:08.479857   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:08.478749   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:08.478966   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:08.479093   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:08.479359   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.481121   13081 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0313 23:28:08.481156   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.482329   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0313 23:28:08.482393   13081 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0313 23:28:08.482442   13081 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0313 23:28:08.483437   13081 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0313 23:28:08.483451   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0313 23:28:08.484482   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.484507   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.484563   13081 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0313 23:28:08.484587   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0313 23:28:08.484606   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:08.483472   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:08.482837   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:08.483216   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:08.483529   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0313 23:28:08.482495   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.483942   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:08.484899   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.485885   13081 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0313 23:28:08.487090   13081 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0313 23:28:08.485929   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:08.485960   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:08.486077   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:08.486097   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:08.486276   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.486386   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:08.487960   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.489982   13081 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0313 23:28:08.488540   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:08.488613   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.488615   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:08.488689   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:08.488765   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:08.490160   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.490787   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:08.491221   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.491296   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.492611   13081 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0313 23:28:08.491333   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:08.491529   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:08.491535   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:08.491963   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:08.491975   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.492534   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:08.491437   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:08.492643   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.494270   13081 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0313 23:28:08.494399   13081 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0313 23:28:08.494404   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:08.494618   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:08.494649   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:08.494661   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:08.494818   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:08.496313   13081 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0313 23:28:08.497566   13081 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0313 23:28:08.498802   13081 out.go:177]   - Using image docker.io/busybox:stable
	I0313 23:28:08.496411   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.496483   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:08.496534   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:08.496524   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:08.497605   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0313 23:28:08.498256   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:08.500031   13081 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0313 23:28:08.500048   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0313 23:28:08.500060   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:08.500111   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:08.500266   13081 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0313 23:28:08.500280   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0313 23:28:08.500293   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:08.500405   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	W0313 23:28:08.501415   13081 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36274->192.168.39.37:22: read: connection reset by peer
	I0313 23:28:08.501450   13081 retry.go:31] will retry after 317.880791ms: ssh: handshake failed: read tcp 192.168.39.1:36274->192.168.39.37:22: read: connection reset by peer
	I0313 23:28:08.503760   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.503986   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.504137   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:08.504155   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.504265   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.504290   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:08.504430   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:08.504511   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:08.504539   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.504561   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:08.504606   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:08.504675   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:08.504910   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:08.504930   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:08.504931   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:08.504915   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:08.505049   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:08.505092   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:08.505197   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:08.505418   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:08.505521   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:08.808786   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0313 23:28:08.881873   13081 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0313 23:28:08.881898   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0313 23:28:08.898326   13081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:28:08.898334   13081 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0313 23:28:08.938276   13081 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0313 23:28:08.938307   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0313 23:28:08.971698   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0313 23:28:08.998928   13081 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0313 23:28:08.998949   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0313 23:28:09.006193   13081 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0313 23:28:09.006218   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0313 23:28:09.024729   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0313 23:28:09.028838   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0313 23:28:09.051141   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0313 23:28:09.071912   13081 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0313 23:28:09.071940   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0313 23:28:09.087073   13081 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0313 23:28:09.087096   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0313 23:28:09.107606   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0313 23:28:09.108892   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0313 23:28:09.128367   13081 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0313 23:28:09.128390   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0313 23:28:09.172375   13081 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0313 23:28:09.172415   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0313 23:28:09.311291   13081 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0313 23:28:09.311311   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0313 23:28:09.328723   13081 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0313 23:28:09.328747   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0313 23:28:09.426960   13081 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0313 23:28:09.426984   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0313 23:28:09.429365   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0313 23:28:09.472365   13081 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0313 23:28:09.472388   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0313 23:28:09.475155   13081 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0313 23:28:09.475178   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0313 23:28:09.504997   13081 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0313 23:28:09.505034   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0313 23:28:09.520290   13081 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0313 23:28:09.520313   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0313 23:28:09.552584   13081 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0313 23:28:09.552609   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0313 23:28:09.696598   13081 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0313 23:28:09.696625   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0313 23:28:09.700325   13081 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0313 23:28:09.700343   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0313 23:28:09.726528   13081 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0313 23:28:09.726560   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0313 23:28:09.751142   13081 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0313 23:28:09.751167   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0313 23:28:09.877456   13081 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0313 23:28:09.877481   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0313 23:28:09.921067   13081 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0313 23:28:09.921090   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0313 23:28:09.967009   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0313 23:28:09.997308   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0313 23:28:10.014731   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0313 23:28:10.022714   13081 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0313 23:28:10.022740   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0313 23:28:10.071496   13081 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0313 23:28:10.071526   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0313 23:28:10.173328   13081 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0313 23:28:10.173347   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0313 23:28:10.359973   13081 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0313 23:28:10.359995   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0313 23:28:10.372577   13081 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0313 23:28:10.372602   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0313 23:28:10.496932   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0313 23:28:10.671726   13081 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0313 23:28:10.671750   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0313 23:28:10.736640   13081 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0313 23:28:10.736660   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0313 23:28:10.789403   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0313 23:28:10.915585   13081 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0313 23:28:10.915614   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0313 23:28:11.063247   13081 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0313 23:28:11.063275   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0313 23:28:11.485746   13081 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0313 23:28:11.485771   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0313 23:28:12.054800   13081 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0313 23:28:12.054826   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0313 23:28:12.499675   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0313 23:28:14.092031   13081 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.193599382s)
	I0313 23:28:14.092046   13081 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.193678939s)
	I0313 23:28:14.092066   13081 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0313 23:28:14.092766   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.28395064s)
	I0313 23:28:14.092810   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:14.092822   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:14.093292   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:14.093315   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:14.093325   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:14.093334   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:14.093294   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:14.093653   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:14.093672   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:14.115781   13081 node_ready.go:35] waiting up to 6m0s for node "addons-524943" to be "Ready" ...
	I0313 23:28:14.122711   13081 node_ready.go:49] node "addons-524943" has status "Ready":"True"
	I0313 23:28:14.122736   13081 node_ready.go:38] duration metric: took 6.925607ms for node "addons-524943" to be "Ready" ...
	I0313 23:28:14.122747   13081 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0313 23:28:14.137498   13081 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-glfcq" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:14.912050   13081 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-524943" context rescaled to 1 replicas
	I0313 23:28:15.015165   13081 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0313 23:28:15.015200   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:15.017973   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:15.018379   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:15.018416   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:15.018543   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:15.018750   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:15.018932   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:15.019083   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:15.544329   13081 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0313 23:28:15.728610   13081 addons.go:234] Setting addon gcp-auth=true in "addons-524943"
	I0313 23:28:15.728662   13081 host.go:66] Checking if "addons-524943" exists ...
	I0313 23:28:15.728993   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:15.729023   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:15.744273   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44985
	I0313 23:28:15.744712   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:15.745272   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:15.745296   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:15.745574   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:15.746212   13081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:28:15.746254   13081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:28:15.761357   13081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38911
	I0313 23:28:15.761813   13081 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:28:15.762323   13081 main.go:141] libmachine: Using API Version  1
	I0313 23:28:15.762349   13081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:28:15.762720   13081 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:28:15.762931   13081 main.go:141] libmachine: (addons-524943) Calling .GetState
	I0313 23:28:15.764613   13081 main.go:141] libmachine: (addons-524943) Calling .DriverName
	I0313 23:28:15.765987   13081 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0313 23:28:15.766015   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHHostname
	I0313 23:28:15.768673   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:15.769125   13081 main.go:141] libmachine: (addons-524943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:7c:3b", ip: ""} in network mk-addons-524943: {Iface:virbr1 ExpiryTime:2024-03-14 00:27:26 +0000 UTC Type:0 Mac:52:54:00:de:7c:3b Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:addons-524943 Clientid:01:52:54:00:de:7c:3b}
	I0313 23:28:15.769156   13081 main.go:141] libmachine: (addons-524943) DBG | domain addons-524943 has defined IP address 192.168.39.37 and MAC address 52:54:00:de:7c:3b in network mk-addons-524943
	I0313 23:28:15.769256   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHPort
	I0313 23:28:15.769464   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHKeyPath
	I0313 23:28:15.769630   13081 main.go:141] libmachine: (addons-524943) Calling .GetSSHUsername
	I0313 23:28:15.769784   13081 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/addons-524943/id_rsa Username:docker}
	I0313 23:28:16.239709   13081 pod_ready.go:102] pod "coredns-5dd5756b68-glfcq" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:18.465156   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.440391381s)
	I0313 23:28:18.465235   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.465257   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.465257   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.436389089s)
	I0313 23:28:18.465258   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.493530469s)
	I0313 23:28:18.465321   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.414152824s)
	I0313 23:28:18.465327   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.465340   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.465351   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.465353   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.465290   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.465385   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.465414   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.357780792s)
	I0313 23:28:18.465429   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.465438   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.465484   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.465493   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.465495   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.356579038s)
	I0313 23:28:18.465502   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.465510   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.465519   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.465562   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.465580   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.036190434s)
	I0313 23:28:18.465597   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.465605   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.465660   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.498625729s)
	I0313 23:28:18.465681   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.465690   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.465711   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.465743   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.465773   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.468439687s)
	I0313 23:28:18.465781   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.465789   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.465798   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.465829   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.451071307s)
	I0313 23:28:18.465849   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.465858   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.465789   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.465872   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.465987   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.969023759s)
	W0313 23:28:18.466017   13081 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0313 23:28:18.466058   13081 retry.go:31] will retry after 205.513501ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0313 23:28:18.466128   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.676689718s)
	I0313 23:28:18.466148   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.466156   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.466220   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.466243   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.466250   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.466257   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.466263   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.466716   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.466745   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.466752   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.466760   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.466792   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.466798   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.466805   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.466815   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.466816   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.466818   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.466823   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.466835   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.466840   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.466849   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.466859   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.466865   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.466873   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.466878   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.466882   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.466887   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.466890   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.466895   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.466897   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.466902   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.466906   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.466924   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.466931   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.466946   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.466947   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.466956   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.466965   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.466972   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.466979   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.466965   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.467000   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.467007   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.467015   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.467022   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.467069   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.467087   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.467094   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.467330   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.467349   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.467358   13081 addons.go:470] Verifying addon metrics-server=true in "addons-524943"
	I0313 23:28:18.467824   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.467842   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.467873   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.467880   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.467887   13081 addons.go:470] Verifying addon registry=true in "addons-524943"
	I0313 23:28:18.469825   13081 out.go:177] * Verifying registry addon...
	I0313 23:28:18.468130   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.469039   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.469064   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.469682   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.469700   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.469728   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.469741   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.469764   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.466805   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.469783   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.469785   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.471196   13081 addons.go:470] Verifying addon ingress=true in "addons-524943"
	I0313 23:28:18.474170   13081 out.go:177] * Verifying ingress addon...
	I0313 23:28:18.471916   13081 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0313 23:28:18.471966   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.471982   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.471986   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.471986   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.471998   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.472198   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:18.472212   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.476015   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.477810   13081 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-524943 service yakd-dashboard -n yakd-dashboard
	
	I0313 23:28:18.476079   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.476711   13081 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0313 23:28:18.477856   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.479766   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.479800   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.500537   13081 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0313 23:28:18.500562   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:18.500862   13081 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0313 23:28:18.500879   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:18.506774   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.506798   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.507077   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.507094   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	W0313 23:28:18.507186   13081 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0313 23:28:18.533966   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:18.533988   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:18.534265   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:18.534284   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:18.649278   13081 pod_ready.go:102] pod "coredns-5dd5756b68-glfcq" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:18.672213   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0313 23:28:19.003746   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:19.004005   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:19.572923   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:19.573409   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:19.852548   13081 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.08653439s)
	I0313 23:28:19.854233   13081 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0313 23:28:19.852773   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.353040973s)
	I0313 23:28:19.854291   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:19.854310   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:19.854642   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:19.854667   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:19.855770   13081 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0313 23:28:19.857225   13081 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0313 23:28:19.857245   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0313 23:28:19.855785   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:19.857302   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:19.857319   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:19.857580   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:19.857597   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:19.857607   13081 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-524943"
	I0313 23:28:19.859203   13081 out.go:177] * Verifying csi-hostpath-driver addon...
	I0313 23:28:19.861785   13081 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0313 23:28:19.898336   13081 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0313 23:28:19.898355   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:19.963317   13081 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0313 23:28:19.963341   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0313 23:28:19.992739   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:19.998241   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:20.058316   13081 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0313 23:28:20.058337   13081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0313 23:28:20.086427   13081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0313 23:28:20.404434   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:20.490992   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:20.529954   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:20.687519   13081 pod_ready.go:102] pod "coredns-5dd5756b68-glfcq" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:20.868498   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:20.982848   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:20.983137   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:21.371381   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:21.485114   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:21.491715   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:21.790153   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.117888878s)
	I0313 23:28:21.790222   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:21.790238   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:21.790527   13081 main.go:141] libmachine: (addons-524943) DBG | Closing plugin on server side
	I0313 23:28:21.790571   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:21.790587   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:21.790603   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:21.790614   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:21.790840   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:21.790867   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:21.872959   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:22.019388   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:22.023853   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:22.143272   13081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.056797s)
	I0313 23:28:22.143334   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:22.143347   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:22.143631   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:22.143663   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:22.143672   13081 main.go:141] libmachine: Making call to close driver server
	I0313 23:28:22.143681   13081 main.go:141] libmachine: (addons-524943) Calling .Close
	I0313 23:28:22.143988   13081 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:28:22.144004   13081 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:28:22.144926   13081 addons.go:470] Verifying addon gcp-auth=true in "addons-524943"
	I0313 23:28:22.146704   13081 out.go:177] * Verifying gcp-auth addon...
	I0313 23:28:22.148872   13081 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0313 23:28:22.175142   13081 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0313 23:28:22.175167   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:22.368694   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:22.482353   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:22.484395   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:22.652528   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:22.867971   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:22.990139   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:22.994156   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:23.144598   13081 pod_ready.go:102] pod "coredns-5dd5756b68-glfcq" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:23.152598   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:23.367721   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:23.482630   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:23.485815   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:23.652361   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:23.868034   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:23.988495   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:23.988828   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:24.152192   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:24.369065   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:24.484411   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:24.484522   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:24.653876   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:24.868138   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:24.987482   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:24.987533   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:25.144673   13081 pod_ready.go:102] pod "coredns-5dd5756b68-glfcq" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:25.152272   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:25.367994   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:25.482808   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:25.485612   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:25.654636   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:26.192173   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:26.196288   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:26.196289   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:26.200743   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:26.367455   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:26.482216   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:26.482503   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:26.657303   13081 pod_ready.go:97] pod "coredns-5dd5756b68-glfcq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-13 23:28:09 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-13 23:28:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-13 23:28:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-13 23:28:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.37 HostIPs:[] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-03-13 23:28:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-13 23:28:12 +0000 UTC,FinishedAt:2024-03-13 23:28:22 +0000 UTC,ContainerID:cri-o://dd8a57a19a1b6e70e1d0f17bf247ed1e4a9e1897bab52070e949fb73f7dbec98,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://dd8a57a19a1b6e70e1d0f17bf247ed1e4a9e1897bab52070e949fb73f7dbec98 Started:0xc00351e430 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0313 23:28:26.657337   13081 pod_ready.go:81] duration metric: took 12.51980737s for pod "coredns-5dd5756b68-glfcq" in "kube-system" namespace to be "Ready" ...
	E0313 23:28:26.657349   13081 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-glfcq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-13 23:28:09 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-13 23:28:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-13 23:28:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-13 23:28:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.37 HostIPs:[] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-03-13 23:28:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-13 23:28:12 +0000 UTC,FinishedAt:2024-03-13 23:28:22 +0000 UTC,ContainerID:cri-o://dd8a57a19a1b6e70e1d0f17bf247ed1e4a9e1897bab52070e949fb73f7dbec98,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://dd8a57a19a1b6e70e1d0f17bf247ed1e4a9e1897bab52070e949fb73f7dbec98 Started:0xc00351e430 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0313 23:28:26.657360   13081 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-k4p5l" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:26.657862   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:26.870882   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:26.981811   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:26.984268   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:27.153203   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:27.367739   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:27.486938   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:27.488116   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:27.652424   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:27.867802   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:27.980658   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:27.983476   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:28.153535   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:28.367584   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:28.481813   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:28.484001   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:28.653173   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:28.663775   13081 pod_ready.go:102] pod "coredns-5dd5756b68-k4p5l" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:28.868775   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:28.982083   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:28.982432   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:29.153475   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:29.369219   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:29.483554   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:29.484981   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:29.653363   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:29.870035   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:29.982612   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:29.985149   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:30.153705   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:30.368395   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:30.482430   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:30.484230   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:30.653304   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:30.664208   13081 pod_ready.go:102] pod "coredns-5dd5756b68-k4p5l" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:30.869140   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:30.981728   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:30.984472   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:31.152505   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:31.368316   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:31.486638   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:31.487074   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:31.652694   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:31.870902   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:31.982960   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:31.985571   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:32.152671   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:32.368804   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:32.483929   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:32.484610   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:32.654626   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:32.666081   13081 pod_ready.go:102] pod "coredns-5dd5756b68-k4p5l" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:32.869198   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:32.983701   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:32.992023   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:33.154262   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:33.368152   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:33.481596   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:33.482672   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:33.653384   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:33.867666   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:33.990976   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:33.995001   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:34.153360   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:34.369563   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:34.484241   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:34.484614   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:34.652870   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:34.868400   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:34.983128   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:34.983899   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:35.153425   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:35.164916   13081 pod_ready.go:102] pod "coredns-5dd5756b68-k4p5l" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:35.369069   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:35.483062   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:35.488729   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:35.652412   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:35.869056   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:35.982380   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:35.983764   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:36.153224   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:36.940653   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:36.941003   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:36.943663   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:36.950035   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:36.960638   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:36.998122   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:37.000017   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:37.153981   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:37.372148   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:37.481253   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:37.484109   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:37.653633   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:37.663571   13081 pod_ready.go:102] pod "coredns-5dd5756b68-k4p5l" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:37.868225   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:37.981036   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:37.982911   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:38.153011   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:38.372628   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:38.481861   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:38.482620   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:38.653621   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:38.867722   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:38.981400   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:38.984171   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:39.157440   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:39.368577   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:39.481989   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:39.483303   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:39.652344   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:39.676351   13081 pod_ready.go:102] pod "coredns-5dd5756b68-k4p5l" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:39.867571   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:39.981895   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:39.983592   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:40.152508   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:40.368783   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:40.482173   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:40.484908   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:40.653247   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:40.870079   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:40.981673   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:40.982644   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:41.153458   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:41.368831   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:41.480856   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:41.482729   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:41.652834   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:41.868502   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:41.981733   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:41.985953   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:42.153648   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:42.163718   13081 pod_ready.go:102] pod "coredns-5dd5756b68-k4p5l" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:42.370075   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:42.484868   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:42.497547   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:42.653354   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:42.873805   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:42.984379   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:42.984594   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:43.152746   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:43.368190   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:43.481838   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:43.482647   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:43.652897   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:43.872268   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:43.982224   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:43.982420   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:44.157220   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:44.164095   13081 pod_ready.go:102] pod "coredns-5dd5756b68-k4p5l" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:44.369538   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:44.482843   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:44.483935   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:44.653129   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:44.868764   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:44.980723   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:44.984623   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:45.152413   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:45.368727   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:45.482042   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:45.482174   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:45.653594   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:45.868384   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:45.981999   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:45.983329   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:46.154233   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:46.164186   13081 pod_ready.go:102] pod "coredns-5dd5756b68-k4p5l" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:46.367063   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:46.481280   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:46.483673   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:46.653129   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:46.867318   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:47.246417   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:47.257487   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:47.257601   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:47.369053   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:47.481360   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:47.484275   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:47.653403   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:47.868176   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:47.982159   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:47.986361   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:48.153745   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:48.164686   13081 pod_ready.go:102] pod "coredns-5dd5756b68-k4p5l" in "kube-system" namespace has status "Ready":"False"
	I0313 23:28:48.366842   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:48.482612   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:48.483171   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:48.653106   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:48.867447   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:48.981682   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:48.981923   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:49.153055   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:49.368827   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:49.481252   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:49.482500   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:49.656100   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:49.867838   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:49.981333   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:49.983014   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:50.153267   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:50.164001   13081 pod_ready.go:92] pod "coredns-5dd5756b68-k4p5l" in "kube-system" namespace has status "Ready":"True"
	I0313 23:28:50.164023   13081 pod_ready.go:81] duration metric: took 23.506654666s for pod "coredns-5dd5756b68-k4p5l" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:50.164032   13081 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-524943" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:50.168861   13081 pod_ready.go:92] pod "etcd-addons-524943" in "kube-system" namespace has status "Ready":"True"
	I0313 23:28:50.168880   13081 pod_ready.go:81] duration metric: took 4.84312ms for pod "etcd-addons-524943" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:50.168888   13081 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-524943" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:50.174019   13081 pod_ready.go:92] pod "kube-apiserver-addons-524943" in "kube-system" namespace has status "Ready":"True"
	I0313 23:28:50.174037   13081 pod_ready.go:81] duration metric: took 5.142827ms for pod "kube-apiserver-addons-524943" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:50.174045   13081 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-524943" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:50.181867   13081 pod_ready.go:92] pod "kube-controller-manager-addons-524943" in "kube-system" namespace has status "Ready":"True"
	I0313 23:28:50.181884   13081 pod_ready.go:81] duration metric: took 7.832955ms for pod "kube-controller-manager-addons-524943" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:50.181893   13081 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bng88" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:50.190801   13081 pod_ready.go:92] pod "kube-proxy-bng88" in "kube-system" namespace has status "Ready":"True"
	I0313 23:28:50.190819   13081 pod_ready.go:81] duration metric: took 8.920774ms for pod "kube-proxy-bng88" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:50.190830   13081 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-524943" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:50.367328   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:50.480945   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:50.485276   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:50.562006   13081 pod_ready.go:92] pod "kube-scheduler-addons-524943" in "kube-system" namespace has status "Ready":"True"
	I0313 23:28:50.562025   13081 pod_ready.go:81] duration metric: took 371.188219ms for pod "kube-scheduler-addons-524943" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:50.562035   13081 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gfg8n" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:50.653843   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:50.868739   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:50.962467   13081 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-gfg8n" in "kube-system" namespace has status "Ready":"True"
	I0313 23:28:50.962488   13081 pod_ready.go:81] duration metric: took 400.447649ms for pod "nvidia-device-plugin-daemonset-gfg8n" in "kube-system" namespace to be "Ready" ...
	I0313 23:28:50.962496   13081 pod_ready.go:38] duration metric: took 36.839735926s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0313 23:28:50.962511   13081 api_server.go:52] waiting for apiserver process to appear ...
	I0313 23:28:50.962558   13081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:28:50.982563   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:50.985899   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:51.005753   13081 api_server.go:72] duration metric: took 42.6869974s to wait for apiserver process to appear ...
	I0313 23:28:51.005780   13081 api_server.go:88] waiting for apiserver healthz status ...
	I0313 23:28:51.005802   13081 api_server.go:253] Checking apiserver healthz at https://192.168.39.37:8443/healthz ...
	I0313 23:28:51.012120   13081 api_server.go:279] https://192.168.39.37:8443/healthz returned 200:
	ok
	I0313 23:28:51.013322   13081 api_server.go:141] control plane version: v1.28.4
	I0313 23:28:51.013344   13081 api_server.go:131] duration metric: took 7.555882ms to wait for apiserver health ...
	I0313 23:28:51.013354   13081 system_pods.go:43] waiting for kube-system pods to appear ...
	I0313 23:28:51.153150   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:51.168217   13081 system_pods.go:59] 18 kube-system pods found
	I0313 23:28:51.168247   13081 system_pods.go:61] "coredns-5dd5756b68-k4p5l" [dca802b0-35f0-4fe2-9a93-83183585beea] Running
	I0313 23:28:51.168257   13081 system_pods.go:61] "csi-hostpath-attacher-0" [9465fc03-7fbb-4517-b234-94900b82f106] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0313 23:28:51.168265   13081 system_pods.go:61] "csi-hostpath-resizer-0" [c50ce4a5-534b-4288-97a0-4b87e8f8c44e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0313 23:28:51.168275   13081 system_pods.go:61] "csi-hostpathplugin-bl52v" [de22fba7-939f-4017-b5d7-93284a6052cf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0313 23:28:51.168282   13081 system_pods.go:61] "etcd-addons-524943" [6fb1eda7-8058-4146-b4e6-605d4362ed04] Running
	I0313 23:28:51.168292   13081 system_pods.go:61] "kube-apiserver-addons-524943" [f3bcb0b6-ca88-4038-a4fa-0d2f3812091e] Running
	I0313 23:28:51.168298   13081 system_pods.go:61] "kube-controller-manager-addons-524943" [3096c397-2fc5-4161-b104-e4775cefb4f2] Running
	I0313 23:28:51.168304   13081 system_pods.go:61] "kube-ingress-dns-minikube" [d224570a-7241-4372-8b9b-1fd3309f4da1] Running
	I0313 23:28:51.168312   13081 system_pods.go:61] "kube-proxy-bng88" [ea0067f4-d8a6-4728-8e4a-b42fab6607ae] Running
	I0313 23:28:51.168317   13081 system_pods.go:61] "kube-scheduler-addons-524943" [13f9ce35-9ef0-425e-a093-7e5e8553d440] Running
	I0313 23:28:51.168324   13081 system_pods.go:61] "metrics-server-69cf46c98-q6mlw" [64934ba6-025a-4498-a9a0-16c88811d1e7] Running
	I0313 23:28:51.168329   13081 system_pods.go:61] "nvidia-device-plugin-daemonset-gfg8n" [b18807a5-a89a-4b4a-bce8-2cf7ba25d3c2] Running
	I0313 23:28:51.168338   13081 system_pods.go:61] "registry-proxy-x4zxx" [07ee3fab-197e-40bc-9c11-42d8c9f9ab20] Running
	I0313 23:28:51.168347   13081 system_pods.go:61] "registry-slzzm" [1bb6324b-9959-47c3-94b5-7217cd8ac6ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0313 23:28:51.168360   13081 system_pods.go:61] "snapshot-controller-58dbcc7b99-2l4w5" [3b90add7-7f44-41ac-8e79-0f80ad72b22c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0313 23:28:51.168374   13081 system_pods.go:61] "snapshot-controller-58dbcc7b99-s2qmh" [bb22c3df-0e0d-429f-970c-27058b653434] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0313 23:28:51.168382   13081 system_pods.go:61] "storage-provisioner" [88b50953-304b-4510-97f8-de1781f722c7] Running
	I0313 23:28:51.168391   13081 system_pods.go:61] "tiller-deploy-7b677967b9-rmstt" [bf228b28-7f98-4e4b-ba99-5547d3ad59eb] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0313 23:28:51.168402   13081 system_pods.go:74] duration metric: took 155.040609ms to wait for pod list to return data ...
	I0313 23:28:51.168416   13081 default_sa.go:34] waiting for default service account to be created ...
	I0313 23:28:51.361533   13081 default_sa.go:45] found service account: "default"
	I0313 23:28:51.361568   13081 default_sa.go:55] duration metric: took 193.14196ms for default service account to be created ...
	I0313 23:28:51.361581   13081 system_pods.go:116] waiting for k8s-apps to be running ...
	I0313 23:28:51.368463   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:51.483327   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:51.485292   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:51.585826   13081 system_pods.go:86] 18 kube-system pods found
	I0313 23:28:51.585851   13081 system_pods.go:89] "coredns-5dd5756b68-k4p5l" [dca802b0-35f0-4fe2-9a93-83183585beea] Running
	I0313 23:28:51.585860   13081 system_pods.go:89] "csi-hostpath-attacher-0" [9465fc03-7fbb-4517-b234-94900b82f106] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0313 23:28:51.585866   13081 system_pods.go:89] "csi-hostpath-resizer-0" [c50ce4a5-534b-4288-97a0-4b87e8f8c44e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0313 23:28:51.585875   13081 system_pods.go:89] "csi-hostpathplugin-bl52v" [de22fba7-939f-4017-b5d7-93284a6052cf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0313 23:28:51.585880   13081 system_pods.go:89] "etcd-addons-524943" [6fb1eda7-8058-4146-b4e6-605d4362ed04] Running
	I0313 23:28:51.585886   13081 system_pods.go:89] "kube-apiserver-addons-524943" [f3bcb0b6-ca88-4038-a4fa-0d2f3812091e] Running
	I0313 23:28:51.585890   13081 system_pods.go:89] "kube-controller-manager-addons-524943" [3096c397-2fc5-4161-b104-e4775cefb4f2] Running
	I0313 23:28:51.585894   13081 system_pods.go:89] "kube-ingress-dns-minikube" [d224570a-7241-4372-8b9b-1fd3309f4da1] Running
	I0313 23:28:51.585901   13081 system_pods.go:89] "kube-proxy-bng88" [ea0067f4-d8a6-4728-8e4a-b42fab6607ae] Running
	I0313 23:28:51.585905   13081 system_pods.go:89] "kube-scheduler-addons-524943" [13f9ce35-9ef0-425e-a093-7e5e8553d440] Running
	I0313 23:28:51.585909   13081 system_pods.go:89] "metrics-server-69cf46c98-q6mlw" [64934ba6-025a-4498-a9a0-16c88811d1e7] Running
	I0313 23:28:51.585913   13081 system_pods.go:89] "nvidia-device-plugin-daemonset-gfg8n" [b18807a5-a89a-4b4a-bce8-2cf7ba25d3c2] Running
	I0313 23:28:51.585917   13081 system_pods.go:89] "registry-proxy-x4zxx" [07ee3fab-197e-40bc-9c11-42d8c9f9ab20] Running
	I0313 23:28:51.585923   13081 system_pods.go:89] "registry-slzzm" [1bb6324b-9959-47c3-94b5-7217cd8ac6ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0313 23:28:51.585931   13081 system_pods.go:89] "snapshot-controller-58dbcc7b99-2l4w5" [3b90add7-7f44-41ac-8e79-0f80ad72b22c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0313 23:28:51.585937   13081 system_pods.go:89] "snapshot-controller-58dbcc7b99-s2qmh" [bb22c3df-0e0d-429f-970c-27058b653434] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0313 23:28:51.585941   13081 system_pods.go:89] "storage-provisioner" [88b50953-304b-4510-97f8-de1781f722c7] Running
	I0313 23:28:51.585945   13081 system_pods.go:89] "tiller-deploy-7b677967b9-rmstt" [bf228b28-7f98-4e4b-ba99-5547d3ad59eb] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0313 23:28:51.585950   13081 system_pods.go:126] duration metric: took 224.364737ms to wait for k8s-apps to be running ...
	I0313 23:28:51.585957   13081 system_svc.go:44] waiting for kubelet service to be running ....
	I0313 23:28:51.585998   13081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:28:51.619014   13081 system_svc.go:56] duration metric: took 33.046582ms WaitForService to wait for kubelet
	I0313 23:28:51.619046   13081 kubeadm.go:576] duration metric: took 43.300292451s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0313 23:28:51.619074   13081 node_conditions.go:102] verifying NodePressure condition ...
	I0313 23:28:51.662658   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:51.762166   13081 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0313 23:28:51.762191   13081 node_conditions.go:123] node cpu capacity is 2
	I0313 23:28:51.762202   13081 node_conditions.go:105] duration metric: took 143.122996ms to run NodePressure ...
	I0313 23:28:51.762212   13081 start.go:240] waiting for startup goroutines ...
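	The node capacity figures above (ephemeral storage and CPU) are read from the node's reported status; a minimal sketch of checking the same values by hand against this cluster, assuming the profile and node name addons-524943 taken from this log:
	  # Illustrative manual check of the capacity reported by node_conditions.go above
	  kubectl --context addons-524943 describe node addons-524943 | grep -A 7 "Capacity:"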
	I0313 23:28:51.867248   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:51.982559   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:51.984472   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:52.155167   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:52.368293   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:52.481385   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:52.483741   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:52.653244   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:52.869539   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:52.983823   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:52.987001   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:53.153919   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:53.370703   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:53.483331   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:53.483476   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:53.653014   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:53.868089   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:54.115629   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:54.115669   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:54.154752   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:54.367920   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:54.480695   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:54.483395   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:54.653214   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:54.867212   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:54.982233   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:54.983895   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:55.153049   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:55.367203   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:55.481891   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:28:55.483377   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:55.653939   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:55.867281   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:55.987460   13081 kapi.go:107] duration metric: took 37.515540821s to wait for kubernetes.io/minikube-addons=registry ...
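	Each kapi.go:96 line above is one poll of a label selector until a matching pod reports Ready; a rough hand-run equivalent of the registry wait that just completed, assuming (not stated in the log) that the registry addon pods carry this label in the kube-system namespace:
	  # Illustrative stand-in for the logged wait loop; namespace is an assumption
	  kubectl --context addons-524943 -n kube-system wait pod \
	    -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=5m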
	I0313 23:28:55.988002   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:56.152782   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:56.368029   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:56.482639   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:56.653068   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:56.867499   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:56.982937   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:57.154354   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:57.368185   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:57.482701   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:57.654323   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:57.867693   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:57.982931   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:58.152794   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:58.367604   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:58.483524   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:58.653310   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:58.868557   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:58.982050   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:59.153143   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:59.367709   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:59.482431   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:28:59.656654   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:28:59.867386   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:28:59.982636   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:00.152740   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:00.368083   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:00.482591   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:00.652782   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:00.869232   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:00.983007   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:01.153230   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:01.367471   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:01.482695   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:01.652961   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:01.866867   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:01.982139   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:02.154091   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:02.368490   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:02.486131   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:02.655711   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:02.867600   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:02.982057   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:03.153103   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:03.368271   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:03.485345   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:03.653393   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:03.876826   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:04.389749   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:04.389834   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:04.391696   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:04.483773   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:04.653827   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:04.869051   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:04.982413   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:05.156087   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:05.367105   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:05.482829   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:05.652766   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:05.867903   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:05.982605   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:06.153345   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:06.369585   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:06.482504   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:06.653997   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:06.867847   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:06.982554   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:07.152624   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:07.369711   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:07.483068   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:07.653297   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:07.867180   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:07.985120   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:08.153256   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:08.368465   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:08.483690   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:08.652569   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:08.867946   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:08.982655   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:09.152981   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:09.374075   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:09.482554   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:09.652381   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:09.867917   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:09.982788   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:10.153146   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:10.368086   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:10.483359   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:10.652399   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:10.868448   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:10.982264   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:11.153900   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:11.367021   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:11.482671   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:11.652917   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:11.867239   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:11.983024   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:12.153211   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:12.374262   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:12.482710   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:12.653094   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:12.870420   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:12.983256   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:13.153698   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:13.367850   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:13.482944   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:13.655673   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:13.867923   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:13.982507   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:14.155023   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:14.367395   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:14.484568   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:14.653496   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:14.868855   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:14.983239   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:15.153457   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:15.371245   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:15.487251   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:15.827548   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:15.867952   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:15.983351   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:16.153719   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:16.368620   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:16.483687   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:16.653447   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:16.870145   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:16.985946   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:17.158658   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:17.369540   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:17.482612   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:17.653950   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:17.868287   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:17.983161   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:18.154723   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:18.368683   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:18.482318   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:18.653280   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:18.867789   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:18.983280   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:19.153541   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:19.371529   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:19.485717   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:19.652480   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:19.870392   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:19.983136   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:20.153654   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:20.388368   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:20.496630   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:20.653019   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:20.870417   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:20.983212   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:21.153146   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:21.370462   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:21.482707   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:21.652972   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:21.867470   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:21.983639   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:22.152807   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:22.368436   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:22.484012   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:22.653958   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:22.868048   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:22.983066   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:23.154310   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:23.369522   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:23.484668   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:23.652808   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:23.868840   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:23.985225   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:24.153274   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:24.370658   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:24.495302   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:24.658154   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:24.869069   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:24.982945   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:25.155329   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:25.373032   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:25.483538   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:25.653998   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:25.869008   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:25.982532   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:26.153394   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:26.369325   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:26.483934   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:26.652679   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:26.868674   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:26.983074   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:27.152738   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:27.370272   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:27.483070   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:27.654272   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:27.867777   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:27.982443   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:28.153208   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:28.367083   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:28.482103   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:28.654990   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:28.867275   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:28.983213   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:29.154417   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:29.368014   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:29.484412   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:30.028104   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:30.028970   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:30.030663   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:30.153750   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:30.368630   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:30.483713   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:30.653261   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:30.869174   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:30.982870   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:31.152939   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:31.367020   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:31.482181   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:31.653531   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:31.868622   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:31.982354   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:32.154941   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:32.367479   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:32.483547   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:32.653094   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:32.868421   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:32.983104   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:33.155623   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:33.369246   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:33.482738   13081 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:29:33.652587   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:33.868322   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:33.982445   13081 kapi.go:107] duration metric: took 1m15.505730166s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0313 23:29:34.153969   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:34.370613   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:34.654619   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:34.868547   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:35.153292   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:35.367700   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:35.656382   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:35.868595   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:36.153673   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:36.368646   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:36.652596   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:36.868912   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:37.153622   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:37.370808   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:37.653539   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:37.868025   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:38.153943   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:38.368496   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:38.653793   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:29:38.868600   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:39.155579   13081 kapi.go:107] duration metric: took 1m17.006700496s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0313 23:29:39.157082   13081 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-524943 cluster.
	I0313 23:29:39.158478   13081 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0313 23:29:39.159789   13081 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
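	The messages above describe how to opt a pod out of the gcp-auth credential mount; a minimal sketch of doing that at pod creation time, assuming the commonly used value "true" for the gcp-auth-skip-secret label (the log only names the key):
	  # Hypothetical example: create a pod the gcp-auth webhook should skip
	  kubectl --context addons-524943 run no-gcp-creds --image=busybox \
	    --labels="gcp-auth-skip-secret=true" --restart=Never -- sleep 3600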
	I0313 23:29:39.368471   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:39.868658   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:40.368771   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:40.870168   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:41.368909   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:41.869471   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:42.367626   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:42.918909   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:43.368290   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:43.868506   13081 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:29:44.370505   13081 kapi.go:107] duration metric: took 1m24.508717927s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0313 23:29:44.372556   13081 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, metrics-server, ingress-dns, nvidia-device-plugin, inspektor-gadget, yakd, helm-tiller, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0313 23:29:44.374090   13081 addons.go:505] duration metric: took 1m36.055287829s for enable addons: enabled=[cloud-spanner storage-provisioner metrics-server ingress-dns nvidia-device-plugin inspektor-gadget yakd helm-tiller storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0313 23:29:44.374139   13081 start.go:245] waiting for cluster config update ...
	I0313 23:29:44.374156   13081 start.go:254] writing updated cluster config ...
	I0313 23:29:44.374401   13081 ssh_runner.go:195] Run: rm -f paused
	I0313 23:29:44.433732   13081 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0313 23:29:44.435844   13081 out.go:177] * Done! kubectl is now configured to use "addons-524943" cluster and "default" namespace by default
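	As an illustrative follow-up (not part of the test itself), two commands one could run at this point to confirm the final state reported above, with the profile name taken from this log:
	  kubectl config current-context            # expected to print the addons-524943 context
	  minikube addons list -p addons-524943     # the addons enabled above should show as "enabled"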
	
	
	==> CRI-O <==
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.558061706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710372750558033080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563306,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de361e0d-fcc7-4baf-8718-16d555fd4c5c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.558640246Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0e36f6e-fb9e-4f52-a908-927b7926ab45 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.558692578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0e36f6e-fb9e-4f52-a908-927b7926ab45 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.559014429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f03a62a32f6609b98f4642213e97d7bcf105c7a68b44352c513cb4c1bd04b28,PodSandboxId:bb233438668fce866a8bacb69da9e31a85b3e4c7aa0e5e411d7ac6938065616f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710372743115899353,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-lncdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83487913-3fa5-407a-99c1-1841716f5b3b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b98cda7,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d644e38adc7bf6b8222f9e2408848cf777dc067e41432ef7ecf08a1eb915d1d4,PodSandboxId:c4d817ef344a13608f59ceae9fb3277ce3f7d55715e3e2d468b1f3e38d1ffc08,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710372638033032855,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-cl9dl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: a4001a0c-3b65-4899-acc4-883f7c9ca10a,},Annotat
ions:map[string]string{io.kubernetes.container.hash: c8e1cb70,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d689677298433450464f4b8c94df94d38364a7d031160f63cf20530c608d10,PodSandboxId:3aba70fafaabaa611f9ace26a143f747b606c4f47db27bae2b3e0bab741ad663,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710372603650690536,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 9cbd44b7-07b2-4686-8df2-24235a9fafde,},Annotations:map[string]string{io.kubernetes.container.hash: 6b464916,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73f29667bd74db21acbfebb2fa7fbb3e44f2104022c72129997217e841336ce5,PodSandboxId:b80827e8bfdccfa14bca9686711af8b2fb5ecabd55c0e4779a06c17ba446d42d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1710372577871178721,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-7425s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6c21035a-0a18-4325-8618-15088147d098,},Annotations:map[string]string{io.kubernetes.container.hash: d3c30494,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b866b1e8a693063b1d837268f59c325687a1d8704deb9bab9a30cc607e6b0648,PodSandboxId:7c85ab51e32dbcf0568c9e14291221e4e4540fcb8d5f39031a34185d6c8e9c6f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710372556924035155,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h9v48,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 145778a3-78a3-4092-97c4-64de0efc4f0c,},Annotations:map[string]string{io.kubernetes.container.hash: a7d850ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ef2f4db01fa771f4bcb7ce6773694f6e8c28a465f2beec8cad1eb4dc619bdd,PodSandboxId:ae19e8eb59bb8f495b65c10f85fa282df36441820ea2a9acc5475f91b2db84da,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710372556785511777,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8rt8v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d8ec0402-0928-44c5-a827-d96b3e00356f,},Annotations:map[string]string{io.kubernetes.container.hash: a6df46c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0adce48302e5b0b7c9b3474467c5c9765c7e3ddb2173ff10595bb799f9d4706,PodSandboxId:1be86ba8f37aad24d97898452ea550dcf1274299aa42cadd8476274e1b2fe162,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710372551320116005,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-skjc9,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e124fdac-4a7a-4cd3-8f6e-144e97cb825e,},Annotations:map[string]string{io.kubernetes.container.hash: 880edacc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf69a8622652d555010ed2793ac758f521847bd2f651176b64721cd2b4ec327,PodSandboxId:e05bea9cfb99d565b7c2214347fb7c4f0d0d83fbb5f6e4e64919a7ec879f0765,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710372497253060912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b50953-304b-4510-97f8-de1781f722c7,},Annotations:map[string]string{io.kubernetes.container.hash: ec299578,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e09fb48daf82f5302511d580f9f00d5e46f9d2a319ce8ce39e39c97f6144201e,PodSandboxId:8f0635b217a1b491cf754bc5067137df415b5d489fea0070574895639b236ab5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710372491543565007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k4p5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dca802b0-35f0-4fe2-9a93-83183585beea,},Annotations:map[string]string{io.kubernetes.container.hash: 805c2895,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e47dbfab1894008e02a511a4cac5280d59467437ad99f3c71b4868164f1b2c9,PodSandboxId:6e8d70e9c6def010dc56adfc5828a8ce226d17f146a02e75d3f3446805f638
fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710372490360626680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bng88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0067f4-d8a6-4728-8e4a-b42fab6607ae,},Annotations:map[string]string{io.kubernetes.container.hash: ddee31a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a500a4a68293f10d13d338fe2a1ccd0d67c006ceefb9ec821fb50d933ce4e2,PodSandboxId:23c26fd9ba56fb67607b199397d7bfa9d41245fb9329206daef4bac84f4f0adf,Metadata:&ContainerMetadata{Na
me:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710372469658906276,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94928c5d00882ab7091ebdfd3c1d0346,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0946b77ce3ad69881191818915917931771bf411364682b1118f0165eaaec77d,PodSandboxId:20d95887a833e4dedf94c6e279e130be3ce9e3e503a9458d692a3538d3224a93,Metadata:&ContainerM
etadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710372469636006179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d5f1584b9027c959152e69fc53da5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6fb9563664fb9f51e4ab09bd318acf410c320362fa90b9b0ca9610b4716505,PodSandboxId:b6ab067e394f781edfbc238d6f98b1e1e033ff4c411bf35d123bfcf14d563d47,Metadata:&ContainerMetadata{Name:etcd
,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710372469539529816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4980b55ce21d48df4915aca0cab4aacd,},Annotations:map[string]string{io.kubernetes.container.hash: 57f0dd33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb1ae6fc9f449565abfe8e1dd0b9b2908eac241bb8ed5b8a3ae3407446fda209,PodSandboxId:3c10fe470217032b6e890b07880f53c062d9ecd21dbc06617af657241865287c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710372469537148619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e7eaa44f3a9a8847b8194e2777b621b,},Annotations:map[string]string{io.kubernetes.container.hash: 77bcd14a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0e36f6e-fb9e-4f52-a908-927b7926ab45 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.601031222Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=33f71a7e-cb2b-40b8-88be-1f3d213619fc name=/runtime.v1.RuntimeService/Version
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.601105384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=33f71a7e-cb2b-40b8-88be-1f3d213619fc name=/runtime.v1.RuntimeService/Version
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.602597015Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7612569b-3fba-40fa-8c74-17a866f030a7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.603862857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710372750603794854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563306,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7612569b-3fba-40fa-8c74-17a866f030a7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.605043377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d874acf3-2085-4ee5-bac8-83f8e7addff3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.605094581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d874acf3-2085-4ee5-bac8-83f8e7addff3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.605510430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f03a62a32f6609b98f4642213e97d7bcf105c7a68b44352c513cb4c1bd04b28,PodSandboxId:bb233438668fce866a8bacb69da9e31a85b3e4c7aa0e5e411d7ac6938065616f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710372743115899353,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-lncdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83487913-3fa5-407a-99c1-1841716f5b3b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b98cda7,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d644e38adc7bf6b8222f9e2408848cf777dc067e41432ef7ecf08a1eb915d1d4,PodSandboxId:c4d817ef344a13608f59ceae9fb3277ce3f7d55715e3e2d468b1f3e38d1ffc08,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710372638033032855,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-cl9dl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: a4001a0c-3b65-4899-acc4-883f7c9ca10a,},Annotat
ions:map[string]string{io.kubernetes.container.hash: c8e1cb70,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d689677298433450464f4b8c94df94d38364a7d031160f63cf20530c608d10,PodSandboxId:3aba70fafaabaa611f9ace26a143f747b606c4f47db27bae2b3e0bab741ad663,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710372603650690536,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 9cbd44b7-07b2-4686-8df2-24235a9fafde,},Annotations:map[string]string{io.kubernetes.container.hash: 6b464916,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73f29667bd74db21acbfebb2fa7fbb3e44f2104022c72129997217e841336ce5,PodSandboxId:b80827e8bfdccfa14bca9686711af8b2fb5ecabd55c0e4779a06c17ba446d42d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1710372577871178721,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-7425s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6c21035a-0a18-4325-8618-15088147d098,},Annotations:map[string]string{io.kubernetes.container.hash: d3c30494,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b866b1e8a693063b1d837268f59c325687a1d8704deb9bab9a30cc607e6b0648,PodSandboxId:7c85ab51e32dbcf0568c9e14291221e4e4540fcb8d5f39031a34185d6c8e9c6f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710372556924035155,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h9v48,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 145778a3-78a3-4092-97c4-64de0efc4f0c,},Annotations:map[string]string{io.kubernetes.container.hash: a7d850ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ef2f4db01fa771f4bcb7ce6773694f6e8c28a465f2beec8cad1eb4dc619bdd,PodSandboxId:ae19e8eb59bb8f495b65c10f85fa282df36441820ea2a9acc5475f91b2db84da,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710372556785511777,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8rt8v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d8ec0402-0928-44c5-a827-d96b3e00356f,},Annotations:map[string]string{io.kubernetes.container.hash: a6df46c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0adce48302e5b0b7c9b3474467c5c9765c7e3ddb2173ff10595bb799f9d4706,PodSandboxId:1be86ba8f37aad24d97898452ea550dcf1274299aa42cadd8476274e1b2fe162,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710372551320116005,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-skjc9,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e124fdac-4a7a-4cd3-8f6e-144e97cb825e,},Annotations:map[string]string{io.kubernetes.container.hash: 880edacc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf69a8622652d555010ed2793ac758f521847bd2f651176b64721cd2b4ec327,PodSandboxId:e05bea9cfb99d565b7c2214347fb7c4f0d0d83fbb5f6e4e64919a7ec879f0765,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710372497253060912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b50953-304b-4510-97f8-de1781f722c7,},Annotations:map[string]string{io.kubernetes.container.hash: ec299578,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e09fb48daf82f5302511d580f9f00d5e46f9d2a319ce8ce39e39c97f6144201e,PodSandboxId:8f0635b217a1b491cf754bc5067137df415b5d489fea0070574895639b236ab5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710372491543565007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k4p5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dca802b0-35f0-4fe2-9a93-83183585beea,},Annotations:map[string]string{io.kubernetes.container.hash: 805c2895,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e47dbfab1894008e02a511a4cac5280d59467437ad99f3c71b4868164f1b2c9,PodSandboxId:6e8d70e9c6def010dc56adfc5828a8ce226d17f146a02e75d3f3446805f638
fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710372490360626680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bng88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0067f4-d8a6-4728-8e4a-b42fab6607ae,},Annotations:map[string]string{io.kubernetes.container.hash: ddee31a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a500a4a68293f10d13d338fe2a1ccd0d67c006ceefb9ec821fb50d933ce4e2,PodSandboxId:23c26fd9ba56fb67607b199397d7bfa9d41245fb9329206daef4bac84f4f0adf,Metadata:&ContainerMetadata{Na
me:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710372469658906276,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94928c5d00882ab7091ebdfd3c1d0346,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0946b77ce3ad69881191818915917931771bf411364682b1118f0165eaaec77d,PodSandboxId:20d95887a833e4dedf94c6e279e130be3ce9e3e503a9458d692a3538d3224a93,Metadata:&ContainerM
etadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710372469636006179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d5f1584b9027c959152e69fc53da5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6fb9563664fb9f51e4ab09bd318acf410c320362fa90b9b0ca9610b4716505,PodSandboxId:b6ab067e394f781edfbc238d6f98b1e1e033ff4c411bf35d123bfcf14d563d47,Metadata:&ContainerMetadata{Name:etcd
,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710372469539529816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4980b55ce21d48df4915aca0cab4aacd,},Annotations:map[string]string{io.kubernetes.container.hash: 57f0dd33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb1ae6fc9f449565abfe8e1dd0b9b2908eac241bb8ed5b8a3ae3407446fda209,PodSandboxId:3c10fe470217032b6e890b07880f53c062d9ecd21dbc06617af657241865287c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710372469537148619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e7eaa44f3a9a8847b8194e2777b621b,},Annotations:map[string]string{io.kubernetes.container.hash: 77bcd14a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d874acf3-2085-4ee5-bac8-83f8e7addff3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.643421095Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=770c213a-48c8-468e-a743-b63b5b6c3871 name=/runtime.v1.RuntimeService/Version
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.643506792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=770c213a-48c8-468e-a743-b63b5b6c3871 name=/runtime.v1.RuntimeService/Version
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.644874242Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17c8e017-7551-4a1e-acad-025c941ce13c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.646292989Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710372750646264414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563306,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17c8e017-7551-4a1e-acad-025c941ce13c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.647122163Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6e8791b-ad0a-4343-923c-a92ff5551841 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.647175213Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6e8791b-ad0a-4343-923c-a92ff5551841 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.647637138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f03a62a32f6609b98f4642213e97d7bcf105c7a68b44352c513cb4c1bd04b28,PodSandboxId:bb233438668fce866a8bacb69da9e31a85b3e4c7aa0e5e411d7ac6938065616f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710372743115899353,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-lncdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83487913-3fa5-407a-99c1-1841716f5b3b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b98cda7,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d644e38adc7bf6b8222f9e2408848cf777dc067e41432ef7ecf08a1eb915d1d4,PodSandboxId:c4d817ef344a13608f59ceae9fb3277ce3f7d55715e3e2d468b1f3e38d1ffc08,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710372638033032855,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-cl9dl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: a4001a0c-3b65-4899-acc4-883f7c9ca10a,},Annotat
ions:map[string]string{io.kubernetes.container.hash: c8e1cb70,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d689677298433450464f4b8c94df94d38364a7d031160f63cf20530c608d10,PodSandboxId:3aba70fafaabaa611f9ace26a143f747b606c4f47db27bae2b3e0bab741ad663,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710372603650690536,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 9cbd44b7-07b2-4686-8df2-24235a9fafde,},Annotations:map[string]string{io.kubernetes.container.hash: 6b464916,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73f29667bd74db21acbfebb2fa7fbb3e44f2104022c72129997217e841336ce5,PodSandboxId:b80827e8bfdccfa14bca9686711af8b2fb5ecabd55c0e4779a06c17ba446d42d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1710372577871178721,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-7425s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6c21035a-0a18-4325-8618-15088147d098,},Annotations:map[string]string{io.kubernetes.container.hash: d3c30494,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b866b1e8a693063b1d837268f59c325687a1d8704deb9bab9a30cc607e6b0648,PodSandboxId:7c85ab51e32dbcf0568c9e14291221e4e4540fcb8d5f39031a34185d6c8e9c6f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710372556924035155,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h9v48,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 145778a3-78a3-4092-97c4-64de0efc4f0c,},Annotations:map[string]string{io.kubernetes.container.hash: a7d850ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ef2f4db01fa771f4bcb7ce6773694f6e8c28a465f2beec8cad1eb4dc619bdd,PodSandboxId:ae19e8eb59bb8f495b65c10f85fa282df36441820ea2a9acc5475f91b2db84da,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710372556785511777,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8rt8v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d8ec0402-0928-44c5-a827-d96b3e00356f,},Annotations:map[string]string{io.kubernetes.container.hash: a6df46c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0adce48302e5b0b7c9b3474467c5c9765c7e3ddb2173ff10595bb799f9d4706,PodSandboxId:1be86ba8f37aad24d97898452ea550dcf1274299aa42cadd8476274e1b2fe162,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710372551320116005,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-skjc9,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e124fdac-4a7a-4cd3-8f6e-144e97cb825e,},Annotations:map[string]string{io.kubernetes.container.hash: 880edacc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf69a8622652d555010ed2793ac758f521847bd2f651176b64721cd2b4ec327,PodSandboxId:e05bea9cfb99d565b7c2214347fb7c4f0d0d83fbb5f6e4e64919a7ec879f0765,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710372497253060912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b50953-304b-4510-97f8-de1781f722c7,},Annotations:map[string]string{io.kubernetes.container.hash: ec299578,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e09fb48daf82f5302511d580f9f00d5e46f9d2a319ce8ce39e39c97f6144201e,PodSandboxId:8f0635b217a1b491cf754bc5067137df415b5d489fea0070574895639b236ab5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710372491543565007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k4p5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dca802b0-35f0-4fe2-9a93-83183585beea,},Annotations:map[string]string{io.kubernetes.container.hash: 805c2895,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e47dbfab1894008e02a511a4cac5280d59467437ad99f3c71b4868164f1b2c9,PodSandboxId:6e8d70e9c6def010dc56adfc5828a8ce226d17f146a02e75d3f3446805f638
fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710372490360626680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bng88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0067f4-d8a6-4728-8e4a-b42fab6607ae,},Annotations:map[string]string{io.kubernetes.container.hash: ddee31a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a500a4a68293f10d13d338fe2a1ccd0d67c006ceefb9ec821fb50d933ce4e2,PodSandboxId:23c26fd9ba56fb67607b199397d7bfa9d41245fb9329206daef4bac84f4f0adf,Metadata:&ContainerMetadata{Na
me:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710372469658906276,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94928c5d00882ab7091ebdfd3c1d0346,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0946b77ce3ad69881191818915917931771bf411364682b1118f0165eaaec77d,PodSandboxId:20d95887a833e4dedf94c6e279e130be3ce9e3e503a9458d692a3538d3224a93,Metadata:&ContainerM
etadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710372469636006179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d5f1584b9027c959152e69fc53da5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6fb9563664fb9f51e4ab09bd318acf410c320362fa90b9b0ca9610b4716505,PodSandboxId:b6ab067e394f781edfbc238d6f98b1e1e033ff4c411bf35d123bfcf14d563d47,Metadata:&ContainerMetadata{Name:etcd
,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710372469539529816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4980b55ce21d48df4915aca0cab4aacd,},Annotations:map[string]string{io.kubernetes.container.hash: 57f0dd33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb1ae6fc9f449565abfe8e1dd0b9b2908eac241bb8ed5b8a3ae3407446fda209,PodSandboxId:3c10fe470217032b6e890b07880f53c062d9ecd21dbc06617af657241865287c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710372469537148619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e7eaa44f3a9a8847b8194e2777b621b,},Annotations:map[string]string{io.kubernetes.container.hash: 77bcd14a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6e8791b-ad0a-4343-923c-a92ff5551841 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.693178395Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5e27ab4-76db-4052-9122-a0a553de467b name=/runtime.v1.RuntimeService/Version
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.693323674Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5e27ab4-76db-4052-9122-a0a553de467b name=/runtime.v1.RuntimeService/Version
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.694855362Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5d0ca91-ea9f-4d15-9e9e-a21277934e27 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.696061945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710372750696037534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563306,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5d0ca91-ea9f-4d15-9e9e-a21277934e27 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.696725940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7793ad4a-0ef0-4a54-9906-5d5b3bb2eaca name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.696800226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7793ad4a-0ef0-4a54-9906-5d5b3bb2eaca name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:32:30 addons-524943 crio[680]: time="2024-03-13 23:32:30.697104420Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f03a62a32f6609b98f4642213e97d7bcf105c7a68b44352c513cb4c1bd04b28,PodSandboxId:bb233438668fce866a8bacb69da9e31a85b3e4c7aa0e5e411d7ac6938065616f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710372743115899353,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-lncdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83487913-3fa5-407a-99c1-1841716f5b3b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b98cda7,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d644e38adc7bf6b8222f9e2408848cf777dc067e41432ef7ecf08a1eb915d1d4,PodSandboxId:c4d817ef344a13608f59ceae9fb3277ce3f7d55715e3e2d468b1f3e38d1ffc08,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710372638033032855,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-cl9dl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: a4001a0c-3b65-4899-acc4-883f7c9ca10a,},Annotat
ions:map[string]string{io.kubernetes.container.hash: c8e1cb70,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d689677298433450464f4b8c94df94d38364a7d031160f63cf20530c608d10,PodSandboxId:3aba70fafaabaa611f9ace26a143f747b606c4f47db27bae2b3e0bab741ad663,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710372603650690536,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 9cbd44b7-07b2-4686-8df2-24235a9fafde,},Annotations:map[string]string{io.kubernetes.container.hash: 6b464916,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73f29667bd74db21acbfebb2fa7fbb3e44f2104022c72129997217e841336ce5,PodSandboxId:b80827e8bfdccfa14bca9686711af8b2fb5ecabd55c0e4779a06c17ba446d42d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1710372577871178721,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-7425s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6c21035a-0a18-4325-8618-15088147d098,},Annotations:map[string]string{io.kubernetes.container.hash: d3c30494,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b866b1e8a693063b1d837268f59c325687a1d8704deb9bab9a30cc607e6b0648,PodSandboxId:7c85ab51e32dbcf0568c9e14291221e4e4540fcb8d5f39031a34185d6c8e9c6f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710372556924035155,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h9v48,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 145778a3-78a3-4092-97c4-64de0efc4f0c,},Annotations:map[string]string{io.kubernetes.container.hash: a7d850ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ef2f4db01fa771f4bcb7ce6773694f6e8c28a465f2beec8cad1eb4dc619bdd,PodSandboxId:ae19e8eb59bb8f495b65c10f85fa282df36441820ea2a9acc5475f91b2db84da,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710372556785511777,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8rt8v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d8ec0402-0928-44c5-a827-d96b3e00356f,},Annotations:map[string]string{io.kubernetes.container.hash: a6df46c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0adce48302e5b0b7c9b3474467c5c9765c7e3ddb2173ff10595bb799f9d4706,PodSandboxId:1be86ba8f37aad24d97898452ea550dcf1274299aa42cadd8476274e1b2fe162,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710372551320116005,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-skjc9,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e124fdac-4a7a-4cd3-8f6e-144e97cb825e,},Annotations:map[string]string{io.kubernetes.container.hash: 880edacc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf69a8622652d555010ed2793ac758f521847bd2f651176b64721cd2b4ec327,PodSandboxId:e05bea9cfb99d565b7c2214347fb7c4f0d0d83fbb5f6e4e64919a7ec879f0765,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710372497253060912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b50953-304b-4510-97f8-de1781f722c7,},Annotations:map[string]string{io.kubernetes.container.hash: ec299578,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e09fb48daf82f5302511d580f9f00d5e46f9d2a319ce8ce39e39c97f6144201e,PodSandboxId:8f0635b217a1b491cf754bc5067137df415b5d489fea0070574895639b236ab5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710372491543565007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k4p5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dca802b0-35f0-4fe2-9a93-83183585beea,},Annotations:map[string]string{io.kubernetes.container.hash: 805c2895,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e47dbfab1894008e02a511a4cac5280d59467437ad99f3c71b4868164f1b2c9,PodSandboxId:6e8d70e9c6def010dc56adfc5828a8ce226d17f146a02e75d3f3446805f638
fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710372490360626680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bng88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0067f4-d8a6-4728-8e4a-b42fab6607ae,},Annotations:map[string]string{io.kubernetes.container.hash: ddee31a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a500a4a68293f10d13d338fe2a1ccd0d67c006ceefb9ec821fb50d933ce4e2,PodSandboxId:23c26fd9ba56fb67607b199397d7bfa9d41245fb9329206daef4bac84f4f0adf,Metadata:&ContainerMetadata{Na
me:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710372469658906276,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94928c5d00882ab7091ebdfd3c1d0346,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0946b77ce3ad69881191818915917931771bf411364682b1118f0165eaaec77d,PodSandboxId:20d95887a833e4dedf94c6e279e130be3ce9e3e503a9458d692a3538d3224a93,Metadata:&ContainerM
etadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710372469636006179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d5f1584b9027c959152e69fc53da5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6fb9563664fb9f51e4ab09bd318acf410c320362fa90b9b0ca9610b4716505,PodSandboxId:b6ab067e394f781edfbc238d6f98b1e1e033ff4c411bf35d123bfcf14d563d47,Metadata:&ContainerMetadata{Name:etcd
,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710372469539529816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4980b55ce21d48df4915aca0cab4aacd,},Annotations:map[string]string{io.kubernetes.container.hash: 57f0dd33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb1ae6fc9f449565abfe8e1dd0b9b2908eac241bb8ed5b8a3ae3407446fda209,PodSandboxId:3c10fe470217032b6e890b07880f53c062d9ecd21dbc06617af657241865287c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710372469537148619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-524943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e7eaa44f3a9a8847b8194e2777b621b,},Annotations:map[string]string{io.kubernetes.container.hash: 77bcd14a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7793ad4a-0ef0-4a54-9906-5d5b3bb2eaca name=/runtime.v1.RuntimeService/ListContainers
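	The repeated Version, ImageFsInfo, and ListContainers request/response pairs in the CRI-O journal above are routine polling calls against /runtime.v1.RuntimeService and /runtime.v1.ImageService. As a minimal sketch only (assuming crictl is present on the node and that sudo is needed for the crio socket, which may vary), the same information can be queried by hand in the report's usual command form:
	
	out/minikube-linux-amd64 -p addons-524943 ssh "sudo crictl version"
	out/minikube-linux-amd64 -p addons-524943 ssh "sudo crictl imagefsinfo"
	out/minikube-linux-amd64 -p addons-524943 ssh "sudo crictl ps -a"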
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7f03a62a32f66       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago        Running             hello-world-app           0                   bb233438668fc       hello-world-app-5d77478584-lncdm
	d644e38adc7bf       ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750                        About a minute ago   Running             headlamp                  0                   c4d817ef344a1       headlamp-5485c556b-cl9dl
	59d6896772984       docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                              2 minutes ago        Running             nginx                     0                   3aba70fafaaba       nginx
	73f29667bd74d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                 2 minutes ago        Running             gcp-auth                  0                   b80827e8bfdcc       gcp-auth-5f6b4f85fd-7425s
	b866b1e8a6930       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago        Exited              patch                     0                   7c85ab51e32db       ingress-nginx-admission-patch-h9v48
	b9ef2f4db01fa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago        Exited              create                    0                   ae19e8eb59bb8       ingress-nginx-admission-create-8rt8v
	b0adce48302e5       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago        Running             yakd                      0                   1be86ba8f37aa       yakd-dashboard-9947fc6bf-skjc9
	fcf69a8622652       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago        Running             storage-provisioner       0                   e05bea9cfb99d       storage-provisioner
	e09fb48daf82f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago        Running             coredns                   0                   8f0635b217a1b       coredns-5dd5756b68-k4p5l
	7e47dbfab1894       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago        Running             kube-proxy                0                   6e8d70e9c6def       kube-proxy-bng88
	c2a500a4a6829       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago        Running             kube-controller-manager   0                   23c26fd9ba56f       kube-controller-manager-addons-524943
	0946b77ce3ad6       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago        Running             kube-scheduler            0                   20d95887a833e       kube-scheduler-addons-524943
	5e6fb9563664f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago        Running             etcd                      0                   b6ab067e394f7       etcd-addons-524943
	cb1ae6fc9f449       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago        Running             kube-apiserver            0                   3c10fe4702170       kube-apiserver-addons-524943
	
	
	==> coredns [e09fb48daf82f5302511d580f9f00d5e46f9d2a319ce8ce39e39c97f6144201e] <==
	[INFO] 10.244.0.7:56302 - 22422 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000156973s
	[INFO] 10.244.0.7:42090 - 34457 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000086703s
	[INFO] 10.244.0.7:42090 - 58267 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000025308s
	[INFO] 10.244.0.7:37725 - 40812 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000037191s
	[INFO] 10.244.0.7:37725 - 35438 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000031725s
	[INFO] 10.244.0.7:38931 - 56628 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000040008s
	[INFO] 10.244.0.7:38931 - 19251 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000070462s
	[INFO] 10.244.0.7:39200 - 47744 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080238s
	[INFO] 10.244.0.7:39200 - 9860 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000054332s
	[INFO] 10.244.0.7:49073 - 42483 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000058522s
	[INFO] 10.244.0.7:49073 - 13308 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000064838s
	[INFO] 10.244.0.7:49746 - 39802 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060389s
	[INFO] 10.244.0.7:49746 - 18808 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049224s
	[INFO] 10.244.0.7:58694 - 52770 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000176374s
	[INFO] 10.244.0.7:58694 - 56352 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000045963s
	[INFO] 10.244.0.22:52474 - 9205 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000824693s
	[INFO] 10.244.0.22:40846 - 29296 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000367225s
	[INFO] 10.244.0.22:47028 - 31201 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011576s
	[INFO] 10.244.0.22:38544 - 5870 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000148625s
	[INFO] 10.244.0.22:37496 - 64639 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000130345s
	[INFO] 10.244.0.22:49793 - 38865 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000225919s
	[INFO] 10.244.0.22:47301 - 18224 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001256707s
	[INFO] 10.244.0.22:50433 - 3377 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001283433s
	[INFO] 10.244.0.24:54599 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000372155s
	[INFO] 10.244.0.24:56762 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000206793s
	
	
	==> describe nodes <==
	Name:               addons-524943
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-524943
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=addons-524943
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_13T23_27_55_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-524943
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:27:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-524943
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Mar 2024 23:32:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Mar 2024 23:30:59 +0000   Wed, 13 Mar 2024 23:27:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Mar 2024 23:30:59 +0000   Wed, 13 Mar 2024 23:27:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Mar 2024 23:30:59 +0000   Wed, 13 Mar 2024 23:27:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Mar 2024 23:30:59 +0000   Wed, 13 Mar 2024 23:27:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.37
	  Hostname:    addons-524943
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	System Info:
	  Machine ID:                 7729e77cb77b41a793fd536ad7837d7e
	  System UUID:                7729e77c-b77b-41a7-93fd-536ad7837d7e
	  Boot ID:                    3157e9cd-2891-4d18-a4df-2afc55805e6b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-lncdm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-5f6b4f85fd-7425s                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  headlamp                    headlamp-5485c556b-cl9dl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 coredns-5dd5756b68-k4p5l                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m21s
	  kube-system                 etcd-addons-524943                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m35s
	  kube-system                 kube-apiserver-addons-524943             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-controller-manager-addons-524943    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-bng88                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-scheduler-addons-524943             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-skjc9           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m18s  kube-proxy       
	  Normal  Starting                 4m35s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m35s  kubelet          Node addons-524943 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m35s  kubelet          Node addons-524943 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m35s  kubelet          Node addons-524943 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m35s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m35s  kubelet          Node addons-524943 status is now: NodeReady
	  Normal  RegisteredNode           4m22s  node-controller  Node addons-524943 event: Registered Node addons-524943 in Controller
	
	
	==> dmesg <==
	[Mar13 23:28] systemd-fstab-generator[1489]: Ignoring "noauto" option for root device
	[  +0.156821] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.402759] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.031513] kauditd_printk_skb: 136 callbacks suppressed
	[  +7.224200] kauditd_printk_skb: 64 callbacks suppressed
	[ +22.547491] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.325685] kauditd_printk_skb: 4 callbacks suppressed
	[Mar13 23:29] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.470697] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.071494] kauditd_printk_skb: 22 callbacks suppressed
	[  +9.995858] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.038228] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.103437] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.841222] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.261728] kauditd_printk_skb: 15 callbacks suppressed
	[Mar13 23:30] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.710769] kauditd_printk_skb: 30 callbacks suppressed
	[  +6.475957] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.114063] kauditd_printk_skb: 22 callbacks suppressed
	[ +19.474403] kauditd_printk_skb: 20 callbacks suppressed
	[ +15.657698] kauditd_printk_skb: 6 callbacks suppressed
	[Mar13 23:31] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.560205] kauditd_printk_skb: 25 callbacks suppressed
	[Mar13 23:32] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.700187] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [5e6fb9563664fb9f51e4ab09bd318acf410c320362fa90b9b0ca9610b4716505] <==
	{"level":"info","ts":"2024-03-13T23:29:15.814551Z","caller":"traceutil/trace.go:171","msg":"trace[22170676] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1017; }","duration":"173.361897ms","start":"2024-03-13T23:29:15.641184Z","end":"2024-03-13T23:29:15.814546Z","steps":["trace[22170676] 'range keys from in-memory index tree'  (duration: 173.065597ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-13T23:29:30.009871Z","caller":"traceutil/trace.go:171","msg":"trace[1237500578] linearizableReadLoop","detail":"{readStateIndex:1144; appliedIndex:1143; }","duration":"369.393229ms","start":"2024-03-13T23:29:29.640463Z","end":"2024-03-13T23:29:30.009856Z","steps":["trace[1237500578] 'read index received'  (duration: 369.214804ms)","trace[1237500578] 'applied index is now lower than readState.Index'  (duration: 177.928µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-13T23:29:30.009992Z","caller":"traceutil/trace.go:171","msg":"trace[2101019771] transaction","detail":"{read_only:false; response_revision:1112; number_of_response:1; }","duration":"472.536164ms","start":"2024-03-13T23:29:29.537448Z","end":"2024-03-13T23:29:30.009984Z","steps":["trace[2101019771] 'process raft request'  (duration: 472.275785ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:29:30.010101Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-13T23:29:29.537432Z","time spent":"472.580214ms","remote":"127.0.0.1:57680","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1106 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-13T23:29:30.010296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"369.84791ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10811"}
	{"level":"info","ts":"2024-03-13T23:29:30.010356Z","caller":"traceutil/trace.go:171","msg":"trace[42828493] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1112; }","duration":"369.910736ms","start":"2024-03-13T23:29:29.640438Z","end":"2024-03-13T23:29:30.010349Z","steps":["trace[42828493] 'agreement among raft nodes before linearized reading'  (duration: 369.809467ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:29:30.010401Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-13T23:29:29.640413Z","time spent":"369.981155ms","remote":"127.0.0.1:57698","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":10834,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-03-13T23:29:30.01056Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.862742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-03-13T23:29:30.010604Z","caller":"traceutil/trace.go:171","msg":"trace[1796899950] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1112; }","duration":"230.90646ms","start":"2024-03-13T23:29:29.779691Z","end":"2024-03-13T23:29:30.010597Z","steps":["trace[1796899950] 'agreement among raft nodes before linearized reading'  (duration: 230.840883ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:29:30.011016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.375352ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81505"}
	{"level":"info","ts":"2024-03-13T23:29:30.011068Z","caller":"traceutil/trace.go:171","msg":"trace[453275340] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1112; }","duration":"158.433217ms","start":"2024-03-13T23:29:29.852629Z","end":"2024-03-13T23:29:30.011062Z","steps":["trace[453275340] 'agreement among raft nodes before linearized reading'  (duration: 158.265076ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:29:30.011665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.370716ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gadget/gadget-rh4fn.17bc76c9a9453656\" ","response":"range_response_count:1 size:779"}
	{"level":"info","ts":"2024-03-13T23:29:30.011713Z","caller":"traceutil/trace.go:171","msg":"trace[1879883285] range","detail":"{range_begin:/registry/events/gadget/gadget-rh4fn.17bc76c9a9453656; range_end:; response_count:1; response_revision:1112; }","duration":"184.891806ms","start":"2024-03-13T23:29:29.826815Z","end":"2024-03-13T23:29:30.011707Z","steps":["trace[1879883285] 'agreement among raft nodes before linearized reading'  (duration: 184.354469ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-13T23:29:42.894628Z","caller":"traceutil/trace.go:171","msg":"trace[290510089] linearizableReadLoop","detail":"{readStateIndex:1208; appliedIndex:1207; }","duration":"390.188052ms","start":"2024-03-13T23:29:42.504426Z","end":"2024-03-13T23:29:42.894614Z","steps":["trace[290510089] 'read index received'  (duration: 389.969864ms)","trace[290510089] 'applied index is now lower than readState.Index'  (duration: 217.559µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-13T23:29:42.894968Z","caller":"traceutil/trace.go:171","msg":"trace[168380952] transaction","detail":"{read_only:false; response_revision:1174; number_of_response:1; }","duration":"458.565594ms","start":"2024-03-13T23:29:42.436392Z","end":"2024-03-13T23:29:42.894957Z","steps":["trace[168380952] 'process raft request'  (duration: 458.04792ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:29:42.896046Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-13T23:29:42.436378Z","time spent":"459.615395ms","remote":"127.0.0.1:57600","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":793,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-rh4fn.17bc76b97fc49ecd\" mod_revision:978 > success:<request_put:<key:\"/registry/events/gadget/gadget-rh4fn.17bc76b97fc49ecd\" value_size:722 lease:6038920530725993822 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-rh4fn.17bc76b97fc49ecd\" > >"}
	{"level":"warn","ts":"2024-03-13T23:29:42.89585Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"391.414524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-13T23:29:42.896238Z","caller":"traceutil/trace.go:171","msg":"trace[798867352] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1174; }","duration":"391.773166ms","start":"2024-03-13T23:29:42.504407Z","end":"2024-03-13T23:29:42.89618Z","steps":["trace[798867352] 'agreement among raft nodes before linearized reading'  (duration: 391.245786ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:29:42.896288Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-13T23:29:42.504393Z","time spent":"391.885654ms","remote":"127.0.0.1:57538","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-03-13T23:29:42.899705Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.050608ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-controller-76dc478dd8-bxs2f\" ","response":"range_response_count:1 size:5896"}
	{"level":"info","ts":"2024-03-13T23:29:42.899846Z","caller":"traceutil/trace.go:171","msg":"trace[1616437097] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-controller-76dc478dd8-bxs2f; range_end:; response_count:1; response_revision:1174; }","duration":"388.736397ms","start":"2024-03-13T23:29:42.511099Z","end":"2024-03-13T23:29:42.899835Z","steps":["trace[1616437097] 'agreement among raft nodes before linearized reading'  (duration: 383.921229ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:29:42.899951Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-13T23:29:42.511086Z","time spent":"388.854737ms","remote":"127.0.0.1:57698","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":5919,"request content":"key:\"/registry/pods/ingress-nginx/ingress-nginx-controller-76dc478dd8-bxs2f\" "}
	{"level":"info","ts":"2024-03-13T23:31:19.607316Z","caller":"traceutil/trace.go:171","msg":"trace[555573040] linearizableReadLoop","detail":"{readStateIndex:1754; appliedIndex:1753; }","duration":"295.970999ms","start":"2024-03-13T23:31:19.311315Z","end":"2024-03-13T23:31:19.607286Z","steps":["trace[555573040] 'read index received'  (duration: 295.752424ms)","trace[555573040] 'applied index is now lower than readState.Index'  (duration: 218.044µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-13T23:31:19.607609Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.229527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" ","response":"range_response_count:1 size:1594"}
	{"level":"info","ts":"2024-03-13T23:31:19.607651Z","caller":"traceutil/trace.go:171","msg":"trace[354263896] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1687; }","duration":"296.352707ms","start":"2024-03-13T23:31:19.31129Z","end":"2024-03-13T23:31:19.607643Z","steps":["trace[354263896] 'agreement among raft nodes before linearized reading'  (duration: 296.188958ms)"],"step_count":1}
	
	
	==> gcp-auth [73f29667bd74db21acbfebb2fa7fbb3e44f2104022c72129997217e841336ce5] <==
	2024/03/13 23:29:37 GCP Auth Webhook started!
	2024/03/13 23:29:50 Ready to marshal response ...
	2024/03/13 23:29:50 Ready to write response ...
	2024/03/13 23:29:56 Ready to marshal response ...
	2024/03/13 23:29:56 Ready to write response ...
	2024/03/13 23:29:57 Ready to marshal response ...
	2024/03/13 23:29:57 Ready to write response ...
	2024/03/13 23:30:02 Ready to marshal response ...
	2024/03/13 23:30:02 Ready to write response ...
	2024/03/13 23:30:03 Ready to marshal response ...
	2024/03/13 23:30:03 Ready to write response ...
	2024/03/13 23:30:16 Ready to marshal response ...
	2024/03/13 23:30:16 Ready to write response ...
	2024/03/13 23:30:21 Ready to marshal response ...
	2024/03/13 23:30:21 Ready to write response ...
	2024/03/13 23:30:21 Ready to marshal response ...
	2024/03/13 23:30:21 Ready to write response ...
	2024/03/13 23:30:21 Ready to marshal response ...
	2024/03/13 23:30:21 Ready to write response ...
	2024/03/13 23:30:46 Ready to marshal response ...
	2024/03/13 23:30:46 Ready to write response ...
	2024/03/13 23:31:16 Ready to marshal response ...
	2024/03/13 23:31:16 Ready to write response ...
	2024/03/13 23:32:19 Ready to marshal response ...
	2024/03/13 23:32:19 Ready to write response ...
	
	
	==> kernel <==
	 23:32:31 up 5 min,  0 users,  load average: 0.76, 1.02, 0.53
	Linux addons-524943 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cb1ae6fc9f449565abfe8e1dd0b9b2908eac241bb8ed5b8a3ae3407446fda209] <==
	W0313 23:29:59.156239       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0313 23:30:21.333031       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.163.230"}
	E0313 23:30:32.486862       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0313 23:30:44.872307       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0313 23:30:59.919015       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0313 23:31:33.118766       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0313 23:31:33.124712       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0313 23:31:33.137067       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0313 23:31:33.137249       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0313 23:31:33.144693       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0313 23:31:33.144789       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0313 23:31:33.166937       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0313 23:31:33.166999       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0313 23:31:33.182557       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0313 23:31:33.182675       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0313 23:31:33.203014       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0313 23:31:33.203133       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0313 23:31:33.218063       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0313 23:31:33.218158       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0313 23:31:33.219598       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0313 23:31:33.219652       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0313 23:31:34.167534       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0313 23:31:34.220259       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0313 23:31:34.240882       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0313 23:32:20.137003       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.59.233"}
	
	
	==> kube-controller-manager [c2a500a4a68293f10d13d338fe2a1ccd0d67c006ceefb9ec821fb50d933ce4e2] <==
	W0313 23:31:50.975750       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0313 23:31:50.975801       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0313 23:31:53.127517       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0313 23:31:53.127629       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0313 23:31:54.096744       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0313 23:31:54.096773       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0313 23:32:09.334750       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0313 23:32:09.334869       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0313 23:32:10.037080       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0313 23:32:10.037182       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0313 23:32:14.513451       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0313 23:32:14.513559       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0313 23:32:14.741050       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0313 23:32:14.741152       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0313 23:32:19.944926       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0313 23:32:19.976853       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-lncdm"
	I0313 23:32:19.985120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="41.378011ms"
	I0313 23:32:20.008526       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="23.30942ms"
	I0313 23:32:20.008706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="48.227µs"
	I0313 23:32:20.018516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="78.869µs"
	I0313 23:32:22.766961       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0313 23:32:22.780317       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="6.837µs"
	I0313 23:32:22.786085       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0313 23:32:23.757897       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.04474ms"
	I0313 23:32:23.758236       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="77.931µs"
	
	
	==> kube-proxy [7e47dbfab1894008e02a511a4cac5280d59467437ad99f3c71b4868164f1b2c9] <==
	I0313 23:28:12.417970       1 server_others.go:69] "Using iptables proxy"
	I0313 23:28:12.436688       1 node.go:141] Successfully retrieved node IP: 192.168.39.37
	I0313 23:28:12.580705       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0313 23:28:12.580725       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0313 23:28:12.598954       1 server_others.go:152] "Using iptables Proxier"
	I0313 23:28:12.599001       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0313 23:28:12.599159       1 server.go:846] "Version info" version="v1.28.4"
	I0313 23:28:12.599167       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:28:12.600335       1 config.go:188] "Starting service config controller"
	I0313 23:28:12.600344       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0313 23:28:12.600362       1 config.go:97] "Starting endpoint slice config controller"
	I0313 23:28:12.600365       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0313 23:28:12.600696       1 config.go:315] "Starting node config controller"
	I0313 23:28:12.600702       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0313 23:28:12.700973       1 shared_informer.go:318] Caches are synced for node config
	I0313 23:28:12.701068       1 shared_informer.go:318] Caches are synced for service config
	I0313 23:28:12.701092       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0946b77ce3ad69881191818915917931771bf411364682b1118f0165eaaec77d] <==
	W0313 23:27:52.335098       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0313 23:27:52.335147       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0313 23:27:52.335156       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0313 23:27:52.335162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0313 23:27:52.335503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0313 23:27:52.335543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0313 23:27:52.338691       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0313 23:27:52.338732       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0313 23:27:53.248040       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0313 23:27:53.248093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0313 23:27:53.387035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0313 23:27:53.393442       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0313 23:27:53.401073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0313 23:27:53.401281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0313 23:27:53.413708       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0313 23:27:53.413862       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0313 23:27:53.442627       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0313 23:27:53.442684       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0313 23:27:53.598252       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0313 23:27:53.598620       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0313 23:27:53.651118       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0313 23:27:53.651308       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0313 23:27:53.697321       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0313 23:27:53.697369       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0313 23:27:55.722817       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 13 23:32:19 addons-524943 kubelet[1276]: I0313 23:32:19.991570    1276 memory_manager.go:346] "RemoveStaleState removing state" podUID="de22fba7-939f-4017-b5d7-93284a6052cf" containerName="csi-snapshotter"
	Mar 13 23:32:19 addons-524943 kubelet[1276]: I0313 23:32:19.991575    1276 memory_manager.go:346] "RemoveStaleState removing state" podUID="c50ce4a5-534b-4288-97a0-4b87e8f8c44e" containerName="csi-resizer"
	Mar 13 23:32:20 addons-524943 kubelet[1276]: I0313 23:32:20.021660    1276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/83487913-3fa5-407a-99c1-1841716f5b3b-gcp-creds\") pod \"hello-world-app-5d77478584-lncdm\" (UID: \"83487913-3fa5-407a-99c1-1841716f5b3b\") " pod="default/hello-world-app-5d77478584-lncdm"
	Mar 13 23:32:20 addons-524943 kubelet[1276]: I0313 23:32:20.021828    1276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mrnk\" (UniqueName: \"kubernetes.io/projected/83487913-3fa5-407a-99c1-1841716f5b3b-kube-api-access-5mrnk\") pod \"hello-world-app-5d77478584-lncdm\" (UID: \"83487913-3fa5-407a-99c1-1841716f5b3b\") " pod="default/hello-world-app-5d77478584-lncdm"
	Mar 13 23:32:21 addons-524943 kubelet[1276]: I0313 23:32:21.138370    1276 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk9w7\" (UniqueName: \"kubernetes.io/projected/d224570a-7241-4372-8b9b-1fd3309f4da1-kube-api-access-wk9w7\") pod \"d224570a-7241-4372-8b9b-1fd3309f4da1\" (UID: \"d224570a-7241-4372-8b9b-1fd3309f4da1\") "
	Mar 13 23:32:21 addons-524943 kubelet[1276]: I0313 23:32:21.140482    1276 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d224570a-7241-4372-8b9b-1fd3309f4da1-kube-api-access-wk9w7" (OuterVolumeSpecName: "kube-api-access-wk9w7") pod "d224570a-7241-4372-8b9b-1fd3309f4da1" (UID: "d224570a-7241-4372-8b9b-1fd3309f4da1"). InnerVolumeSpecName "kube-api-access-wk9w7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 13 23:32:21 addons-524943 kubelet[1276]: I0313 23:32:21.239151    1276 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wk9w7\" (UniqueName: \"kubernetes.io/projected/d224570a-7241-4372-8b9b-1fd3309f4da1-kube-api-access-wk9w7\") on node \"addons-524943\" DevicePath \"\""
	Mar 13 23:32:21 addons-524943 kubelet[1276]: I0313 23:32:21.696952    1276 scope.go:117] "RemoveContainer" containerID="848b128ab3965bde5595bd0d61a348492fc66fd251670e84a6e8d32429892d92"
	Mar 13 23:32:21 addons-524943 kubelet[1276]: I0313 23:32:21.749455    1276 scope.go:117] "RemoveContainer" containerID="848b128ab3965bde5595bd0d61a348492fc66fd251670e84a6e8d32429892d92"
	Mar 13 23:32:21 addons-524943 kubelet[1276]: E0313 23:32:21.750374    1276 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"848b128ab3965bde5595bd0d61a348492fc66fd251670e84a6e8d32429892d92\": container with ID starting with 848b128ab3965bde5595bd0d61a348492fc66fd251670e84a6e8d32429892d92 not found: ID does not exist" containerID="848b128ab3965bde5595bd0d61a348492fc66fd251670e84a6e8d32429892d92"
	Mar 13 23:32:21 addons-524943 kubelet[1276]: I0313 23:32:21.750421    1276 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"848b128ab3965bde5595bd0d61a348492fc66fd251670e84a6e8d32429892d92"} err="failed to get container status \"848b128ab3965bde5595bd0d61a348492fc66fd251670e84a6e8d32429892d92\": rpc error: code = NotFound desc = could not find container \"848b128ab3965bde5595bd0d61a348492fc66fd251670e84a6e8d32429892d92\": container with ID starting with 848b128ab3965bde5595bd0d61a348492fc66fd251670e84a6e8d32429892d92 not found: ID does not exist"
	Mar 13 23:32:23 addons-524943 kubelet[1276]: I0313 23:32:23.433430    1276 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="145778a3-78a3-4092-97c4-64de0efc4f0c" path="/var/lib/kubelet/pods/145778a3-78a3-4092-97c4-64de0efc4f0c/volumes"
	Mar 13 23:32:23 addons-524943 kubelet[1276]: I0313 23:32:23.433863    1276 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d224570a-7241-4372-8b9b-1fd3309f4da1" path="/var/lib/kubelet/pods/d224570a-7241-4372-8b9b-1fd3309f4da1/volumes"
	Mar 13 23:32:23 addons-524943 kubelet[1276]: I0313 23:32:23.434340    1276 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d8ec0402-0928-44c5-a827-d96b3e00356f" path="/var/lib/kubelet/pods/d8ec0402-0928-44c5-a827-d96b3e00356f/volumes"
	Mar 13 23:32:26 addons-524943 kubelet[1276]: I0313 23:32:26.078516    1276 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64fvp\" (UniqueName: \"kubernetes.io/projected/efa12364-06b1-4fb0-b78d-c7e115e9b6a7-kube-api-access-64fvp\") pod \"efa12364-06b1-4fb0-b78d-c7e115e9b6a7\" (UID: \"efa12364-06b1-4fb0-b78d-c7e115e9b6a7\") "
	Mar 13 23:32:26 addons-524943 kubelet[1276]: I0313 23:32:26.078601    1276 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/efa12364-06b1-4fb0-b78d-c7e115e9b6a7-webhook-cert\") pod \"efa12364-06b1-4fb0-b78d-c7e115e9b6a7\" (UID: \"efa12364-06b1-4fb0-b78d-c7e115e9b6a7\") "
	Mar 13 23:32:26 addons-524943 kubelet[1276]: I0313 23:32:26.081393    1276 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa12364-06b1-4fb0-b78d-c7e115e9b6a7-kube-api-access-64fvp" (OuterVolumeSpecName: "kube-api-access-64fvp") pod "efa12364-06b1-4fb0-b78d-c7e115e9b6a7" (UID: "efa12364-06b1-4fb0-b78d-c7e115e9b6a7"). InnerVolumeSpecName "kube-api-access-64fvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 13 23:32:26 addons-524943 kubelet[1276]: I0313 23:32:26.081788    1276 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa12364-06b1-4fb0-b78d-c7e115e9b6a7-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "efa12364-06b1-4fb0-b78d-c7e115e9b6a7" (UID: "efa12364-06b1-4fb0-b78d-c7e115e9b6a7"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 13 23:32:26 addons-524943 kubelet[1276]: I0313 23:32:26.179298    1276 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/efa12364-06b1-4fb0-b78d-c7e115e9b6a7-webhook-cert\") on node \"addons-524943\" DevicePath \"\""
	Mar 13 23:32:26 addons-524943 kubelet[1276]: I0313 23:32:26.179336    1276 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-64fvp\" (UniqueName: \"kubernetes.io/projected/efa12364-06b1-4fb0-b78d-c7e115e9b6a7-kube-api-access-64fvp\") on node \"addons-524943\" DevicePath \"\""
	Mar 13 23:32:26 addons-524943 kubelet[1276]: I0313 23:32:26.749636    1276 scope.go:117] "RemoveContainer" containerID="11c67f0ad70d941d4f793b4936fdf6aff7941543c62797d87ec933a56dfdb06b"
	Mar 13 23:32:26 addons-524943 kubelet[1276]: I0313 23:32:26.769998    1276 scope.go:117] "RemoveContainer" containerID="11c67f0ad70d941d4f793b4936fdf6aff7941543c62797d87ec933a56dfdb06b"
	Mar 13 23:32:26 addons-524943 kubelet[1276]: E0313 23:32:26.770697    1276 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11c67f0ad70d941d4f793b4936fdf6aff7941543c62797d87ec933a56dfdb06b\": container with ID starting with 11c67f0ad70d941d4f793b4936fdf6aff7941543c62797d87ec933a56dfdb06b not found: ID does not exist" containerID="11c67f0ad70d941d4f793b4936fdf6aff7941543c62797d87ec933a56dfdb06b"
	Mar 13 23:32:26 addons-524943 kubelet[1276]: I0313 23:32:26.770757    1276 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11c67f0ad70d941d4f793b4936fdf6aff7941543c62797d87ec933a56dfdb06b"} err="failed to get container status \"11c67f0ad70d941d4f793b4936fdf6aff7941543c62797d87ec933a56dfdb06b\": rpc error: code = NotFound desc = could not find container \"11c67f0ad70d941d4f793b4936fdf6aff7941543c62797d87ec933a56dfdb06b\": container with ID starting with 11c67f0ad70d941d4f793b4936fdf6aff7941543c62797d87ec933a56dfdb06b not found: ID does not exist"
	Mar 13 23:32:27 addons-524943 kubelet[1276]: I0313 23:32:27.435645    1276 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="efa12364-06b1-4fb0-b78d-c7e115e9b6a7" path="/var/lib/kubelet/pods/efa12364-06b1-4fb0-b78d-c7e115e9b6a7/volumes"
	
	
	==> storage-provisioner [fcf69a8622652d555010ed2793ac758f521847bd2f651176b64721cd2b4ec327] <==
	I0313 23:28:17.653031       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0313 23:28:17.679274       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0313 23:28:17.679308       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0313 23:28:17.730792       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0313 23:28:17.730989       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-524943_17b9557f-3d87-468d-8ae6-970ac3ebec47!
	I0313 23:28:17.731923       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6e202b19-e3aa-45c9-9b80-61a38360bc1b", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-524943_17b9557f-3d87-468d-8ae6-970ac3ebec47 became leader
	I0313 23:28:17.831733       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-524943_17b9557f-3d87-468d-8ae6-970ac3ebec47!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-524943 -n addons-524943
helpers_test.go:261: (dbg) Run:  kubectl --context addons-524943 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.75s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.48s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-524943
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-524943: exit status 82 (2m0.501437419s)

                                                
                                                
-- stdout --
	* Stopping node "addons-524943"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-524943" : exit status 82
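For anyone re-running this stop step by hand, a minimal shell sketch of the follow-up that the advice box above asks for, assuming the profile name from this run (addons-524943) and a minikube binary equivalent to out/minikube-linux-amd64 on PATH; the final delete is a hypothetical cleanup step, not part of the test itself:

	# Confirm what state the guest is actually in after the timed-out stop.
	minikube status -p addons-524943

	# Collect the full log bundle the error message asks to attach to an issue.
	minikube logs --file=logs.txt -p addons-524943

	# Hypothetical cleanup if the VM never stops: remove the profile entirely.
	minikube delete -p addons-524943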
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-524943
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-524943: exit status 11 (21.695941496s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.37:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-524943" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-524943
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-524943: exit status 11 (6.143640881s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.37:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-524943" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-524943
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-524943: exit status 11 (6.143363411s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.37:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-524943" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.48s)
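The three addon commands above all fail for the same underlying reason: after the stop times out, minikube can no longer open an SSH session to the guest ("dial tcp 192.168.39.37:22: connect: no route to host"), so the "check paused" step aborts with exit status 11. As a minimal stand-alone sketch (not minikube code), the reachability problem can be reproduced with a plain TCP probe against the guest's SSH port; the address below is copied from the log and is otherwise an assumption:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the failure above; substitute your own guest IP.
		addr := "192.168.39.37:22"

		// The addon enable/disable paths only need the guest's SSH port to answer;
		// this probe reproduces the "no route to host" symptom directly.
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Println("guest unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("guest SSH port reachable")
	}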

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (7.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-112122 ssh pgrep buildkitd: exit status 1 (242.51687ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image build -t localhost/my-image:functional-112122 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 image build -t localhost/my-image:functional-112122 testdata/build --alsologtostderr: (4.627946442s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-112122 image build -t localhost/my-image:functional-112122 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> afb1bfc7104
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-112122
--> 3a5f0d9de05
Successfully tagged localhost/my-image:functional-112122
3a5f0d9de052d144447b1e921364131396b52995699a527a5d43d34d3d118187
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-112122 image build -t localhost/my-image:functional-112122 testdata/build --alsologtostderr:
I0313 23:44:13.308156   21717 out.go:291] Setting OutFile to fd 1 ...
I0313 23:44:13.308295   21717 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:44:13.308308   21717 out.go:304] Setting ErrFile to fd 2...
I0313 23:44:13.308314   21717 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:44:13.308542   21717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
I0313 23:44:13.309123   21717 config.go:182] Loaded profile config "functional-112122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0313 23:44:13.309692   21717 config.go:182] Loaded profile config "functional-112122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0313 23:44:13.310056   21717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0313 23:44:13.310094   21717 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:44:13.328342   21717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34649
I0313 23:44:13.328780   21717 main.go:141] libmachine: () Calling .GetVersion
I0313 23:44:13.329412   21717 main.go:141] libmachine: Using API Version  1
I0313 23:44:13.329439   21717 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:44:13.329830   21717 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:44:13.330032   21717 main.go:141] libmachine: (functional-112122) Calling .GetState
I0313 23:44:13.332112   21717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0313 23:44:13.332176   21717 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:44:13.347325   21717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
I0313 23:44:13.347671   21717 main.go:141] libmachine: () Calling .GetVersion
I0313 23:44:13.348080   21717 main.go:141] libmachine: Using API Version  1
I0313 23:44:13.348099   21717 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:44:13.348430   21717 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:44:13.348627   21717 main.go:141] libmachine: (functional-112122) Calling .DriverName
I0313 23:44:13.348830   21717 ssh_runner.go:195] Run: systemctl --version
I0313 23:44:13.348864   21717 main.go:141] libmachine: (functional-112122) Calling .GetSSHHostname
I0313 23:44:13.351317   21717 main.go:141] libmachine: (functional-112122) DBG | domain functional-112122 has defined MAC address 52:54:00:f0:68:bc in network mk-functional-112122
I0313 23:44:13.351655   21717 main.go:141] libmachine: (functional-112122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:68:bc", ip: ""} in network mk-functional-112122: {Iface:virbr1 ExpiryTime:2024-03-14 00:36:37 +0000 UTC Type:0 Mac:52:54:00:f0:68:bc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:functional-112122 Clientid:01:52:54:00:f0:68:bc}
I0313 23:44:13.351682   21717 main.go:141] libmachine: (functional-112122) DBG | domain functional-112122 has defined IP address 192.168.39.224 and MAC address 52:54:00:f0:68:bc in network mk-functional-112122
I0313 23:44:13.351849   21717 main.go:141] libmachine: (functional-112122) Calling .GetSSHPort
I0313 23:44:13.352002   21717 main.go:141] libmachine: (functional-112122) Calling .GetSSHKeyPath
I0313 23:44:13.352149   21717 main.go:141] libmachine: (functional-112122) Calling .GetSSHUsername
I0313 23:44:13.352294   21717 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/functional-112122/id_rsa Username:docker}
I0313 23:44:13.436158   21717 build_images.go:161] Building image from path: /tmp/build.749725795.tar
I0313 23:44:13.436226   21717 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0313 23:44:13.460420   21717 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.749725795.tar
I0313 23:44:13.466828   21717 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.749725795.tar: stat -c "%s %y" /var/lib/minikube/build/build.749725795.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.749725795.tar': No such file or directory
I0313 23:44:13.466868   21717 ssh_runner.go:362] scp /tmp/build.749725795.tar --> /var/lib/minikube/build/build.749725795.tar (3072 bytes)
I0313 23:44:13.503458   21717 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.749725795
I0313 23:44:13.516098   21717 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.749725795 -xf /var/lib/minikube/build/build.749725795.tar
I0313 23:44:13.527789   21717 crio.go:297] Building image: /var/lib/minikube/build/build.749725795
I0313 23:44:13.527870   21717 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-112122 /var/lib/minikube/build/build.749725795 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0313 23:44:17.802310   21717 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-112122 /var/lib/minikube/build/build.749725795 --cgroup-manager=cgroupfs: (4.274414162s)
I0313 23:44:17.802381   21717 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.749725795
I0313 23:44:17.830633   21717 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.749725795.tar
I0313 23:44:17.863507   21717 build_images.go:217] Built localhost/my-image:functional-112122 from /tmp/build.749725795.tar
I0313 23:44:17.863543   21717 build_images.go:133] succeeded building to: functional-112122
I0313 23:44:17.863549   21717 build_images.go:134] failed building to: 
I0313 23:44:17.863569   21717 main.go:141] libmachine: Making call to close driver server
I0313 23:44:17.863578   21717 main.go:141] libmachine: (functional-112122) Calling .Close
I0313 23:44:17.863868   21717 main.go:141] libmachine: (functional-112122) DBG | Closing plugin on server side
I0313 23:44:17.863896   21717 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:44:17.863912   21717 main.go:141] libmachine: Making call to close connection to plugin binary
I0313 23:44:17.863928   21717 main.go:141] libmachine: Making call to close driver server
I0313 23:44:17.863940   21717 main.go:141] libmachine: (functional-112122) Calling .Close
I0313 23:44:17.864192   21717 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:44:17.864208   21717 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 image ls: (2.440731417s)
functional_test.go:442: expected "localhost/my-image:functional-112122" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (7.31s)
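Note that the build itself succeeds (podman tags localhost/my-image:functional-112122 and reports success); the failure comes from the follow-up check at functional_test.go:442, where "image ls" does not show the tag. A rough stand-alone reproduction of that build-then-verify flow, assuming a minikube binary on PATH and an existing "functional-112122" profile (both assumptions; the test itself invokes out/minikube-linux-amd64):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const profile = "functional-112122"                // assumed profile name, taken from the log above
		const tag = "localhost/my-image:functional-112122" // tag used by the failing test

		// Build the image inside the cluster, as the test does.
		build := exec.Command("minikube", "-p", profile, "image", "build",
			"-t", tag, "testdata/build")
		if out, err := build.CombinedOutput(); err != nil {
			fmt.Printf("image build failed: %v\n%s", err, out)
			return
		}

		// List images and look for the tag; its absence is what fails above.
		out, err := exec.Command("minikube", "-p", profile, "image", "ls").Output()
		if err != nil {
			fmt.Printf("image ls failed: %v\n", err)
			return
		}
		if !strings.Contains(string(out), tag) {
			fmt.Printf("expected %q to be loaded into minikube but the image is not there\n", tag)
			return
		}
		fmt.Println("image present")
	}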

                                                
                                    
TestMutliControlPlane/serial/StopSecondaryNode (142.15s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 node stop m02 -v=7 --alsologtostderr
E0313 23:51:20.180009   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-504633 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.493068846s)

                                                
                                                
-- stdout --
	* Stopping node "ha-504633-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0313 23:51:06.539500   26435 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:51:06.539859   26435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:51:06.539873   26435 out.go:304] Setting ErrFile to fd 2...
	I0313 23:51:06.539880   26435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:51:06.540084   26435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:51:06.540344   26435 mustload.go:65] Loading cluster: ha-504633
	I0313 23:51:06.540797   26435 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:51:06.540816   26435 stop.go:39] StopHost: ha-504633-m02
	I0313 23:51:06.541228   26435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:51:06.541265   26435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:51:06.557775   26435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34111
	I0313 23:51:06.558342   26435 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:51:06.558960   26435 main.go:141] libmachine: Using API Version  1
	I0313 23:51:06.558979   26435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:51:06.559382   26435 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:51:06.562277   26435 out.go:177] * Stopping node "ha-504633-m02"  ...
	I0313 23:51:06.564054   26435 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0313 23:51:06.564111   26435 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:51:06.564373   26435 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0313 23:51:06.564398   26435 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:51:06.567623   26435 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:51:06.568066   26435 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:51:06.568104   26435 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:51:06.568364   26435 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:51:06.568594   26435 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:51:06.568793   26435 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:51:06.568970   26435 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	I0313 23:51:06.655544   26435 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0313 23:51:06.710420   26435 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0313 23:51:06.766390   26435 main.go:141] libmachine: Stopping "ha-504633-m02"...
	I0313 23:51:06.766434   26435 main.go:141] libmachine: (ha-504633-m02) Calling .GetState
	I0313 23:51:06.767919   26435 main.go:141] libmachine: (ha-504633-m02) Calling .Stop
	I0313 23:51:06.771813   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 0/120
	I0313 23:51:07.773490   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 1/120
	I0313 23:51:08.775004   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 2/120
	I0313 23:51:09.776698   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 3/120
	I0313 23:51:10.778123   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 4/120
	I0313 23:51:11.779790   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 5/120
	I0313 23:51:12.781723   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 6/120
	I0313 23:51:13.783918   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 7/120
	I0313 23:51:14.785269   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 8/120
	I0313 23:51:15.787100   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 9/120
	I0313 23:51:16.789792   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 10/120
	I0313 23:51:17.792175   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 11/120
	I0313 23:51:18.793629   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 12/120
	I0313 23:51:19.795325   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 13/120
	I0313 23:51:20.797265   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 14/120
	I0313 23:51:21.799283   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 15/120
	I0313 23:51:22.801610   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 16/120
	I0313 23:51:23.803883   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 17/120
	I0313 23:51:24.805430   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 18/120
	I0313 23:51:25.807467   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 19/120
	I0313 23:51:26.809508   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 20/120
	I0313 23:51:27.810759   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 21/120
	I0313 23:51:28.812419   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 22/120
	I0313 23:51:29.813898   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 23/120
	I0313 23:51:30.815336   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 24/120
	I0313 23:51:31.816721   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 25/120
	I0313 23:51:32.818254   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 26/120
	I0313 23:51:33.820309   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 27/120
	I0313 23:51:34.822059   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 28/120
	I0313 23:51:35.823533   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 29/120
	I0313 23:51:36.825907   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 30/120
	I0313 23:51:37.827558   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 31/120
	I0313 23:51:38.829076   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 32/120
	I0313 23:51:39.830603   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 33/120
	I0313 23:51:40.832210   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 34/120
	I0313 23:51:41.834212   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 35/120
	I0313 23:51:42.835524   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 36/120
	I0313 23:51:43.837414   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 37/120
	I0313 23:51:44.839024   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 38/120
	I0313 23:51:45.841222   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 39/120
	I0313 23:51:46.843184   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 40/120
	I0313 23:51:47.844584   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 41/120
	I0313 23:51:48.846199   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 42/120
	I0313 23:51:49.847498   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 43/120
	I0313 23:51:50.849188   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 44/120
	I0313 23:51:51.850952   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 45/120
	I0313 23:51:52.853312   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 46/120
	I0313 23:51:53.854675   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 47/120
	I0313 23:51:54.855952   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 48/120
	I0313 23:51:55.857149   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 49/120
	I0313 23:51:56.859310   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 50/120
	I0313 23:51:57.861135   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 51/120
	I0313 23:51:58.862825   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 52/120
	I0313 23:51:59.864129   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 53/120
	I0313 23:52:00.865509   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 54/120
	I0313 23:52:01.867621   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 55/120
	I0313 23:52:02.869339   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 56/120
	I0313 23:52:03.870805   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 57/120
	I0313 23:52:04.872036   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 58/120
	I0313 23:52:05.873392   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 59/120
	I0313 23:52:06.874663   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 60/120
	I0313 23:52:07.876044   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 61/120
	I0313 23:52:08.877342   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 62/120
	I0313 23:52:09.878872   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 63/120
	I0313 23:52:10.880233   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 64/120
	I0313 23:52:11.882177   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 65/120
	I0313 23:52:12.883683   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 66/120
	I0313 23:52:13.885709   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 67/120
	I0313 23:52:14.887457   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 68/120
	I0313 23:52:15.889047   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 69/120
	I0313 23:52:16.891482   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 70/120
	I0313 23:52:17.892856   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 71/120
	I0313 23:52:18.894170   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 72/120
	I0313 23:52:19.895595   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 73/120
	I0313 23:52:20.897016   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 74/120
	I0313 23:52:21.899121   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 75/120
	I0313 23:52:22.901179   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 76/120
	I0313 23:52:23.902583   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 77/120
	I0313 23:52:24.904221   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 78/120
	I0313 23:52:25.905645   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 79/120
	I0313 23:52:26.907552   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 80/120
	I0313 23:52:27.908914   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 81/120
	I0313 23:52:28.910210   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 82/120
	I0313 23:52:29.911566   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 83/120
	I0313 23:52:30.913219   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 84/120
	I0313 23:52:31.915172   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 85/120
	I0313 23:52:32.916409   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 86/120
	I0313 23:52:33.917701   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 87/120
	I0313 23:52:34.919135   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 88/120
	I0313 23:52:35.921364   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 89/120
	I0313 23:52:36.923351   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 90/120
	I0313 23:52:37.924800   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 91/120
	I0313 23:52:38.926305   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 92/120
	I0313 23:52:39.927791   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 93/120
	I0313 23:52:40.929421   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 94/120
	I0313 23:52:41.931347   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 95/120
	I0313 23:52:42.932900   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 96/120
	I0313 23:52:43.934211   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 97/120
	I0313 23:52:44.935897   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 98/120
	I0313 23:52:45.937555   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 99/120
	I0313 23:52:46.940185   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 100/120
	I0313 23:52:47.941515   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 101/120
	I0313 23:52:48.943157   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 102/120
	I0313 23:52:49.944577   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 103/120
	I0313 23:52:50.946408   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 104/120
	I0313 23:52:51.947807   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 105/120
	I0313 23:52:52.949275   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 106/120
	I0313 23:52:53.951050   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 107/120
	I0313 23:52:54.953334   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 108/120
	I0313 23:52:55.954759   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 109/120
	I0313 23:52:56.956943   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 110/120
	I0313 23:52:57.958417   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 111/120
	I0313 23:52:58.960017   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 112/120
	I0313 23:52:59.962439   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 113/120
	I0313 23:53:00.964705   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 114/120
	I0313 23:53:01.966151   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 115/120
	I0313 23:53:02.967446   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 116/120
	I0313 23:53:03.969296   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 117/120
	I0313 23:53:04.971479   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 118/120
	I0313 23:53:05.972862   26435 main.go:141] libmachine: (ha-504633-m02) Waiting for machine to stop 119/120
	I0313 23:53:06.973964   26435 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0313 23:53:06.974089   26435 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-504633 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr: exit status 3 (19.184715557s)

                                                
                                                
-- stdout --
	ha-504633
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-504633-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0313 23:53:07.031451   26767 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:53:07.031697   26767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:53:07.031707   26767 out.go:304] Setting ErrFile to fd 2...
	I0313 23:53:07.031711   26767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:53:07.031873   26767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:53:07.032032   26767 out.go:298] Setting JSON to false
	I0313 23:53:07.032056   26767 mustload.go:65] Loading cluster: ha-504633
	I0313 23:53:07.032117   26767 notify.go:220] Checking for updates...
	I0313 23:53:07.032468   26767 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:53:07.032486   26767 status.go:255] checking status of ha-504633 ...
	I0313 23:53:07.032917   26767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:07.032978   26767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:07.053398   26767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
	I0313 23:53:07.053810   26767 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:07.054512   26767 main.go:141] libmachine: Using API Version  1
	I0313 23:53:07.054534   26767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:07.054980   26767 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:07.055213   26767 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:53:07.057156   26767 status.go:330] ha-504633 host status = "Running" (err=<nil>)
	I0313 23:53:07.057173   26767 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:53:07.057472   26767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:07.057510   26767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:07.071949   26767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0313 23:53:07.072422   26767 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:07.072894   26767 main.go:141] libmachine: Using API Version  1
	I0313 23:53:07.072914   26767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:07.073210   26767 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:07.073414   26767 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:53:07.076476   26767 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:07.077137   26767 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:53:07.077163   26767 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:07.077361   26767 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:53:07.077677   26767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:07.077712   26767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:07.092692   26767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43863
	I0313 23:53:07.093097   26767 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:07.093564   26767 main.go:141] libmachine: Using API Version  1
	I0313 23:53:07.093580   26767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:07.093935   26767 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:07.094160   26767 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:53:07.094373   26767 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:07.094396   26767 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:53:07.097284   26767 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:07.097733   26767 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:53:07.097762   26767 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:07.097927   26767 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:53:07.098093   26767 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:53:07.098213   26767 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:53:07.098342   26767 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:53:07.189237   26767 ssh_runner.go:195] Run: systemctl --version
	I0313 23:53:07.197131   26767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:07.215207   26767 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:53:07.215235   26767 api_server.go:166] Checking apiserver status ...
	I0313 23:53:07.215276   26767 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:53:07.231550   26767 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0313 23:53:07.242629   26767 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:53:07.242679   26767 ssh_runner.go:195] Run: ls
	I0313 23:53:07.247748   26767 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:53:07.254494   26767 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:53:07.254523   26767 status.go:422] ha-504633 apiserver status = Running (err=<nil>)
	I0313 23:53:07.254533   26767 status.go:257] ha-504633 status: &{Name:ha-504633 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:53:07.254549   26767 status.go:255] checking status of ha-504633-m02 ...
	I0313 23:53:07.254990   26767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:07.255037   26767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:07.270629   26767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33095
	I0313 23:53:07.271015   26767 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:07.271452   26767 main.go:141] libmachine: Using API Version  1
	I0313 23:53:07.271477   26767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:07.271838   26767 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:07.272019   26767 main.go:141] libmachine: (ha-504633-m02) Calling .GetState
	I0313 23:53:07.274164   26767 status.go:330] ha-504633-m02 host status = "Running" (err=<nil>)
	I0313 23:53:07.274180   26767 host.go:66] Checking if "ha-504633-m02" exists ...
	I0313 23:53:07.274524   26767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:07.274615   26767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:07.290797   26767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34689
	I0313 23:53:07.291344   26767 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:07.291960   26767 main.go:141] libmachine: Using API Version  1
	I0313 23:53:07.291983   26767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:07.292344   26767 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:07.292534   26767 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:53:07.295856   26767 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:07.296357   26767 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:53:07.296388   26767 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:07.296538   26767 host.go:66] Checking if "ha-504633-m02" exists ...
	I0313 23:53:07.296978   26767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:07.297051   26767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:07.311930   26767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46663
	I0313 23:53:07.312375   26767 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:07.312931   26767 main.go:141] libmachine: Using API Version  1
	I0313 23:53:07.312957   26767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:07.313299   26767 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:07.313500   26767 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:53:07.313733   26767 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:07.313766   26767 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:53:07.316679   26767 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:07.317139   26767 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:53:07.317165   26767 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:07.317365   26767 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:53:07.317523   26767 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:53:07.317755   26767 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:53:07.317908   26767 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	W0313 23:53:25.786971   26767 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.47:22: connect: no route to host
	W0313 23:53:25.787071   26767 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	E0313 23:53:25.787085   26767 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:53:25.787093   26767 status.go:257] ha-504633-m02 status: &{Name:ha-504633-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0313 23:53:25.787127   26767 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:53:25.787139   26767 status.go:255] checking status of ha-504633-m03 ...
	I0313 23:53:25.787546   26767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:25.787616   26767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:25.802132   26767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I0313 23:53:25.802560   26767 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:25.803100   26767 main.go:141] libmachine: Using API Version  1
	I0313 23:53:25.803128   26767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:25.803476   26767 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:25.803674   26767 main.go:141] libmachine: (ha-504633-m03) Calling .GetState
	I0313 23:53:25.805374   26767 status.go:330] ha-504633-m03 host status = "Running" (err=<nil>)
	I0313 23:53:25.805391   26767 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:53:25.805718   26767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:25.805780   26767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:25.821288   26767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37115
	I0313 23:53:25.821723   26767 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:25.822200   26767 main.go:141] libmachine: Using API Version  1
	I0313 23:53:25.822228   26767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:25.822549   26767 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:25.822800   26767 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:53:25.826322   26767 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:25.826816   26767 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:53:25.826847   26767 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:25.827084   26767 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:53:25.827393   26767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:25.827437   26767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:25.842390   26767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I0313 23:53:25.842943   26767 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:25.843452   26767 main.go:141] libmachine: Using API Version  1
	I0313 23:53:25.843470   26767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:25.843861   26767 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:25.844143   26767 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:53:25.844363   26767 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:25.844382   26767 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:53:25.847482   26767 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:25.848013   26767 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:53:25.848034   26767 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:25.848128   26767 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:53:25.848312   26767 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:53:25.848468   26767 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:53:25.848642   26767 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:53:25.933133   26767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:25.951744   26767 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:53:25.951771   26767 api_server.go:166] Checking apiserver status ...
	I0313 23:53:25.951803   26767 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:53:25.966955   26767 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup
	W0313 23:53:25.977251   26767 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:53:25.977324   26767 ssh_runner.go:195] Run: ls
	I0313 23:53:25.982715   26767 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:53:25.987470   26767 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:53:25.987488   26767 status.go:422] ha-504633-m03 apiserver status = Running (err=<nil>)
	I0313 23:53:25.987496   26767 status.go:257] ha-504633-m03 status: &{Name:ha-504633-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:53:25.987509   26767 status.go:255] checking status of ha-504633-m04 ...
	I0313 23:53:25.987783   26767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:25.987814   26767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:26.002140   26767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43509
	I0313 23:53:26.002537   26767 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:26.003033   26767 main.go:141] libmachine: Using API Version  1
	I0313 23:53:26.003057   26767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:26.003390   26767 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:26.003566   26767 main.go:141] libmachine: (ha-504633-m04) Calling .GetState
	I0313 23:53:26.005338   26767 status.go:330] ha-504633-m04 host status = "Running" (err=<nil>)
	I0313 23:53:26.005357   26767 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:53:26.005630   26767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:26.005676   26767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:26.021028   26767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
	I0313 23:53:26.021455   26767 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:26.021971   26767 main.go:141] libmachine: Using API Version  1
	I0313 23:53:26.021993   26767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:26.022320   26767 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:26.022510   26767 main.go:141] libmachine: (ha-504633-m04) Calling .GetIP
	I0313 23:53:26.024979   26767 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:26.025333   26767 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:53:26.025367   26767 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:26.025504   26767 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:53:26.025807   26767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:26.025850   26767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:26.041202   26767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38309
	I0313 23:53:26.041659   26767 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:26.042074   26767 main.go:141] libmachine: Using API Version  1
	I0313 23:53:26.042098   26767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:26.042385   26767 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:26.042560   26767 main.go:141] libmachine: (ha-504633-m04) Calling .DriverName
	I0313 23:53:26.042739   26767 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:26.042757   26767 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHHostname
	I0313 23:53:26.045445   26767 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:26.045848   26767 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:53:26.045873   26767 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:26.046013   26767 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHPort
	I0313 23:53:26.046155   26767 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHKeyPath
	I0313 23:53:26.046318   26767 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHUsername
	I0313 23:53:26.046439   26767 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m04/id_rsa Username:docker}
	I0313 23:53:26.140114   26767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:26.158245   26767 status.go:257] ha-504633-m04 status: &{Name:ha-504633-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr" : exit status 3
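The stop never completes: the kvm2 driver polls the VM state once per second and gives up after 120 attempts, as the "Waiting for machine to stop N/120" lines show, so "node stop" exits with status 30 and the subsequent status call finds m02's SSH port unreachable. The loop below is a schematic version of that polling pattern with the state probe stubbed out; stopped() is a placeholder introduced for this sketch, not a minikube function:

	package main

	import (
		"fmt"
		"time"
	)

	// waitForStop polls once per second for up to 120 attempts (~2 minutes),
	// mirroring the loop in the log above. The probe is injected so the sketch
	// stays independent of any driver code.
	func waitForStop(stopped func() (bool, error)) error {
		for i := 0; i < 120; i++ {
			ok, err := stopped()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", "Running")
	}

	func main() {
		// Simulate the failing run: the VM never reports a stopped state.
		err := waitForStop(func() (bool, error) { return false, nil })
		fmt.Println(err) // unable to stop vm, current state "Running"
	}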
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-504633 -n ha-504633
helpers_test.go:244: <<< TestMutliControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-504633 logs -n 25: (1.541156394s)
helpers_test.go:252: TestMutliControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile1259924449/001/cp-test_ha-504633-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633:/home/docker/cp-test_ha-504633-m03_ha-504633.txt                       |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633 sudo cat                                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m03_ha-504633.txt                                 |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m02:/home/docker/cp-test_ha-504633-m03_ha-504633-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m02 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m03_ha-504633-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04:/home/docker/cp-test_ha-504633-m03_ha-504633-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m04 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m03_ha-504633-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp testdata/cp-test.txt                                                | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile1259924449/001/cp-test_ha-504633-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633:/home/docker/cp-test_ha-504633-m04_ha-504633.txt                       |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633 sudo cat                                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633.txt                                 |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m02:/home/docker/cp-test_ha-504633-m04_ha-504633-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m02 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03:/home/docker/cp-test_ha-504633-m04_ha-504633-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m03 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-504633 node stop m02 -v=7                                                     | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/13 23:44:32
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0313 23:44:32.125716   22414 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:44:32.125833   22414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:44:32.125839   22414 out.go:304] Setting ErrFile to fd 2...
	I0313 23:44:32.125843   22414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:44:32.126008   22414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:44:32.126601   22414 out.go:298] Setting JSON to false
	I0313 23:44:32.127455   22414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1615,"bootTime":1710371857,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:44:32.127515   22414 start.go:139] virtualization: kvm guest
	I0313 23:44:32.129842   22414 out.go:177] * [ha-504633] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:44:32.131786   22414 out.go:177]   - MINIKUBE_LOCATION=18375
	I0313 23:44:32.131832   22414 notify.go:220] Checking for updates...
	I0313 23:44:32.134799   22414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:44:32.136125   22414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:44:32.137286   22414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:44:32.138690   22414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0313 23:44:32.140047   22414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0313 23:44:32.141601   22414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:44:32.176193   22414 out.go:177] * Using the kvm2 driver based on user configuration
	I0313 23:44:32.177334   22414 start.go:297] selected driver: kvm2
	I0313 23:44:32.177345   22414 start.go:901] validating driver "kvm2" against <nil>
	I0313 23:44:32.177355   22414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0313 23:44:32.178044   22414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:44:32.178113   22414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0313 23:44:32.192528   22414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0313 23:44:32.192572   22414 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0313 23:44:32.192767   22414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0313 23:44:32.192791   22414 cni.go:84] Creating CNI manager for ""
	I0313 23:44:32.192797   22414 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0313 23:44:32.192805   22414 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0313 23:44:32.192864   22414 start.go:340] cluster config:
	{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:44:32.192964   22414 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:44:32.194590   22414 out.go:177] * Starting "ha-504633" primary control-plane node in "ha-504633" cluster
	I0313 23:44:32.195784   22414 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:44:32.195820   22414 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0313 23:44:32.195829   22414 cache.go:56] Caching tarball of preloaded images
	I0313 23:44:32.195907   22414 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0313 23:44:32.195918   22414 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0313 23:44:32.196194   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:44:32.196212   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json: {Name:mk320919ac7140aab6984d0075187e5388514b68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:44:32.196336   22414 start.go:360] acquireMachinesLock for ha-504633: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0313 23:44:32.196362   22414 start.go:364] duration metric: took 14.269µs to acquireMachinesLock for "ha-504633"
	I0313 23:44:32.196375   22414 start.go:93] Provisioning new machine with config: &{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:44:32.196424   22414 start.go:125] createHost starting for "" (driver="kvm2")
	I0313 23:44:32.198067   22414 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0313 23:44:32.198188   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:44:32.198234   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:44:32.212049   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42649
	I0313 23:44:32.212441   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:44:32.213011   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:44:32.213036   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:44:32.213349   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:44:32.213562   22414 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:44:32.213737   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:32.213863   22414 start.go:159] libmachine.API.Create for "ha-504633" (driver="kvm2")
	I0313 23:44:32.213890   22414 client.go:168] LocalClient.Create starting
	I0313 23:44:32.213924   22414 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem
	I0313 23:44:32.213961   22414 main.go:141] libmachine: Decoding PEM data...
	I0313 23:44:32.213978   22414 main.go:141] libmachine: Parsing certificate...
	I0313 23:44:32.214031   22414 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem
	I0313 23:44:32.214049   22414 main.go:141] libmachine: Decoding PEM data...
	I0313 23:44:32.214063   22414 main.go:141] libmachine: Parsing certificate...
	I0313 23:44:32.214076   22414 main.go:141] libmachine: Running pre-create checks...
	I0313 23:44:32.214087   22414 main.go:141] libmachine: (ha-504633) Calling .PreCreateCheck
	I0313 23:44:32.214377   22414 main.go:141] libmachine: (ha-504633) Calling .GetConfigRaw
	I0313 23:44:32.214733   22414 main.go:141] libmachine: Creating machine...
	I0313 23:44:32.214752   22414 main.go:141] libmachine: (ha-504633) Calling .Create
	I0313 23:44:32.214892   22414 main.go:141] libmachine: (ha-504633) Creating KVM machine...
	I0313 23:44:32.216190   22414 main.go:141] libmachine: (ha-504633) DBG | found existing default KVM network
	I0313 23:44:32.216832   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:32.216715   22437 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0313 23:44:32.216874   22414 main.go:141] libmachine: (ha-504633) DBG | created network xml: 
	I0313 23:44:32.216898   22414 main.go:141] libmachine: (ha-504633) DBG | <network>
	I0313 23:44:32.216923   22414 main.go:141] libmachine: (ha-504633) DBG |   <name>mk-ha-504633</name>
	I0313 23:44:32.216943   22414 main.go:141] libmachine: (ha-504633) DBG |   <dns enable='no'/>
	I0313 23:44:32.216955   22414 main.go:141] libmachine: (ha-504633) DBG |   
	I0313 23:44:32.216969   22414 main.go:141] libmachine: (ha-504633) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0313 23:44:32.216981   22414 main.go:141] libmachine: (ha-504633) DBG |     <dhcp>
	I0313 23:44:32.216991   22414 main.go:141] libmachine: (ha-504633) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0313 23:44:32.217004   22414 main.go:141] libmachine: (ha-504633) DBG |     </dhcp>
	I0313 23:44:32.217013   22414 main.go:141] libmachine: (ha-504633) DBG |   </ip>
	I0313 23:44:32.217025   22414 main.go:141] libmachine: (ha-504633) DBG |   
	I0313 23:44:32.217035   22414 main.go:141] libmachine: (ha-504633) DBG | </network>
	I0313 23:44:32.217046   22414 main.go:141] libmachine: (ha-504633) DBG | 
	I0313 23:44:32.221854   22414 main.go:141] libmachine: (ha-504633) DBG | trying to create private KVM network mk-ha-504633 192.168.39.0/24...
	I0313 23:44:32.289918   22414 main.go:141] libmachine: (ha-504633) DBG | private KVM network mk-ha-504633 192.168.39.0/24 created
	I0313 23:44:32.289947   22414 main.go:141] libmachine: (ha-504633) Setting up store path in /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633 ...
	I0313 23:44:32.289992   22414 main.go:141] libmachine: (ha-504633) Building disk image from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0313 23:44:32.290024   22414 main.go:141] libmachine: (ha-504633) Downloading /home/jenkins/minikube-integration/18375-4912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0313 23:44:32.290045   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:32.289899   22437 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:44:32.512558   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:32.512388   22437 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa...
	I0313 23:44:32.585720   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:32.585595   22437 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/ha-504633.rawdisk...
	I0313 23:44:32.585766   22414 main.go:141] libmachine: (ha-504633) DBG | Writing magic tar header
	I0313 23:44:32.585776   22414 main.go:141] libmachine: (ha-504633) DBG | Writing SSH key tar header
	I0313 23:44:32.585789   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:32.585701   22437 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633 ...
	I0313 23:44:32.585807   22414 main.go:141] libmachine: (ha-504633) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633
	I0313 23:44:32.585877   22414 main.go:141] libmachine: (ha-504633) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines
	I0313 23:44:32.585911   22414 main.go:141] libmachine: (ha-504633) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:44:32.585924   22414 main.go:141] libmachine: (ha-504633) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633 (perms=drwx------)
	I0313 23:44:32.585939   22414 main.go:141] libmachine: (ha-504633) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines (perms=drwxr-xr-x)
	I0313 23:44:32.585954   22414 main.go:141] libmachine: (ha-504633) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube (perms=drwxr-xr-x)
	I0313 23:44:32.585965   22414 main.go:141] libmachine: (ha-504633) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912
	I0313 23:44:32.585981   22414 main.go:141] libmachine: (ha-504633) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0313 23:44:32.585991   22414 main.go:141] libmachine: (ha-504633) DBG | Checking permissions on dir: /home/jenkins
	I0313 23:44:32.585998   22414 main.go:141] libmachine: (ha-504633) DBG | Checking permissions on dir: /home
	I0313 23:44:32.586011   22414 main.go:141] libmachine: (ha-504633) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912 (perms=drwxrwxr-x)
	I0313 23:44:32.586017   22414 main.go:141] libmachine: (ha-504633) DBG | Skipping /home - not owner
	I0313 23:44:32.586040   22414 main.go:141] libmachine: (ha-504633) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0313 23:44:32.586063   22414 main.go:141] libmachine: (ha-504633) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0313 23:44:32.586075   22414 main.go:141] libmachine: (ha-504633) Creating domain...
	I0313 23:44:32.587118   22414 main.go:141] libmachine: (ha-504633) define libvirt domain using xml: 
	I0313 23:44:32.587140   22414 main.go:141] libmachine: (ha-504633) <domain type='kvm'>
	I0313 23:44:32.587152   22414 main.go:141] libmachine: (ha-504633)   <name>ha-504633</name>
	I0313 23:44:32.587157   22414 main.go:141] libmachine: (ha-504633)   <memory unit='MiB'>2200</memory>
	I0313 23:44:32.587162   22414 main.go:141] libmachine: (ha-504633)   <vcpu>2</vcpu>
	I0313 23:44:32.587166   22414 main.go:141] libmachine: (ha-504633)   <features>
	I0313 23:44:32.587171   22414 main.go:141] libmachine: (ha-504633)     <acpi/>
	I0313 23:44:32.587175   22414 main.go:141] libmachine: (ha-504633)     <apic/>
	I0313 23:44:32.587180   22414 main.go:141] libmachine: (ha-504633)     <pae/>
	I0313 23:44:32.587184   22414 main.go:141] libmachine: (ha-504633)     
	I0313 23:44:32.587189   22414 main.go:141] libmachine: (ha-504633)   </features>
	I0313 23:44:32.587194   22414 main.go:141] libmachine: (ha-504633)   <cpu mode='host-passthrough'>
	I0313 23:44:32.587199   22414 main.go:141] libmachine: (ha-504633)   
	I0313 23:44:32.587205   22414 main.go:141] libmachine: (ha-504633)   </cpu>
	I0313 23:44:32.587210   22414 main.go:141] libmachine: (ha-504633)   <os>
	I0313 23:44:32.587218   22414 main.go:141] libmachine: (ha-504633)     <type>hvm</type>
	I0313 23:44:32.587223   22414 main.go:141] libmachine: (ha-504633)     <boot dev='cdrom'/>
	I0313 23:44:32.587227   22414 main.go:141] libmachine: (ha-504633)     <boot dev='hd'/>
	I0313 23:44:32.587233   22414 main.go:141] libmachine: (ha-504633)     <bootmenu enable='no'/>
	I0313 23:44:32.587238   22414 main.go:141] libmachine: (ha-504633)   </os>
	I0313 23:44:32.587254   22414 main.go:141] libmachine: (ha-504633)   <devices>
	I0313 23:44:32.587270   22414 main.go:141] libmachine: (ha-504633)     <disk type='file' device='cdrom'>
	I0313 23:44:32.587278   22414 main.go:141] libmachine: (ha-504633)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/boot2docker.iso'/>
	I0313 23:44:32.587282   22414 main.go:141] libmachine: (ha-504633)       <target dev='hdc' bus='scsi'/>
	I0313 23:44:32.587287   22414 main.go:141] libmachine: (ha-504633)       <readonly/>
	I0313 23:44:32.587291   22414 main.go:141] libmachine: (ha-504633)     </disk>
	I0313 23:44:32.587296   22414 main.go:141] libmachine: (ha-504633)     <disk type='file' device='disk'>
	I0313 23:44:32.587305   22414 main.go:141] libmachine: (ha-504633)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0313 23:44:32.587315   22414 main.go:141] libmachine: (ha-504633)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/ha-504633.rawdisk'/>
	I0313 23:44:32.587322   22414 main.go:141] libmachine: (ha-504633)       <target dev='hda' bus='virtio'/>
	I0313 23:44:32.587326   22414 main.go:141] libmachine: (ha-504633)     </disk>
	I0313 23:44:32.587332   22414 main.go:141] libmachine: (ha-504633)     <interface type='network'>
	I0313 23:44:32.587337   22414 main.go:141] libmachine: (ha-504633)       <source network='mk-ha-504633'/>
	I0313 23:44:32.587342   22414 main.go:141] libmachine: (ha-504633)       <model type='virtio'/>
	I0313 23:44:32.587372   22414 main.go:141] libmachine: (ha-504633)     </interface>
	I0313 23:44:32.587398   22414 main.go:141] libmachine: (ha-504633)     <interface type='network'>
	I0313 23:44:32.587410   22414 main.go:141] libmachine: (ha-504633)       <source network='default'/>
	I0313 23:44:32.587422   22414 main.go:141] libmachine: (ha-504633)       <model type='virtio'/>
	I0313 23:44:32.587432   22414 main.go:141] libmachine: (ha-504633)     </interface>
	I0313 23:44:32.587443   22414 main.go:141] libmachine: (ha-504633)     <serial type='pty'>
	I0313 23:44:32.587457   22414 main.go:141] libmachine: (ha-504633)       <target port='0'/>
	I0313 23:44:32.587467   22414 main.go:141] libmachine: (ha-504633)     </serial>
	I0313 23:44:32.587494   22414 main.go:141] libmachine: (ha-504633)     <console type='pty'>
	I0313 23:44:32.587521   22414 main.go:141] libmachine: (ha-504633)       <target type='serial' port='0'/>
	I0313 23:44:32.587540   22414 main.go:141] libmachine: (ha-504633)     </console>
	I0313 23:44:32.587551   22414 main.go:141] libmachine: (ha-504633)     <rng model='virtio'>
	I0313 23:44:32.587563   22414 main.go:141] libmachine: (ha-504633)       <backend model='random'>/dev/random</backend>
	I0313 23:44:32.587574   22414 main.go:141] libmachine: (ha-504633)     </rng>
	I0313 23:44:32.587584   22414 main.go:141] libmachine: (ha-504633)     
	I0313 23:44:32.587594   22414 main.go:141] libmachine: (ha-504633)     
	I0313 23:44:32.587615   22414 main.go:141] libmachine: (ha-504633)   </devices>
	I0313 23:44:32.587632   22414 main.go:141] libmachine: (ha-504633) </domain>
	I0313 23:44:32.587642   22414 main.go:141] libmachine: (ha-504633) 
	I0313 23:44:32.591667   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:2d:9d:87 in network default
	I0313 23:44:32.592245   22414 main.go:141] libmachine: (ha-504633) Ensuring networks are active...
	I0313 23:44:32.592267   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:32.592995   22414 main.go:141] libmachine: (ha-504633) Ensuring network default is active
	I0313 23:44:32.593264   22414 main.go:141] libmachine: (ha-504633) Ensuring network mk-ha-504633 is active
	I0313 23:44:32.593831   22414 main.go:141] libmachine: (ha-504633) Getting domain xml...
	I0313 23:44:32.594434   22414 main.go:141] libmachine: (ha-504633) Creating domain...
	I0313 23:44:33.778039   22414 main.go:141] libmachine: (ha-504633) Waiting to get IP...
	I0313 23:44:33.778816   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:33.779142   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:33.779172   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:33.779123   22437 retry.go:31] will retry after 306.290275ms: waiting for machine to come up
	I0313 23:44:34.086721   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:34.087139   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:34.087180   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:34.087115   22437 retry.go:31] will retry after 343.376293ms: waiting for machine to come up
	I0313 23:44:34.431840   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:34.432327   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:34.432349   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:34.432275   22437 retry.go:31] will retry after 379.783985ms: waiting for machine to come up
	I0313 23:44:34.813983   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:34.814535   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:34.814575   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:34.814467   22437 retry.go:31] will retry after 541.31159ms: waiting for machine to come up
	I0313 23:44:35.357035   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:35.357545   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:35.357572   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:35.357504   22437 retry.go:31] will retry after 659.350133ms: waiting for machine to come up
	I0313 23:44:36.018159   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:36.018542   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:36.018557   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:36.018509   22437 retry.go:31] will retry after 654.425245ms: waiting for machine to come up
	I0313 23:44:36.674443   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:36.674941   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:36.674974   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:36.674916   22437 retry.go:31] will retry after 956.937793ms: waiting for machine to come up
	I0313 23:44:37.634017   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:37.634591   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:37.634613   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:37.634541   22437 retry.go:31] will retry after 966.617352ms: waiting for machine to come up
	I0313 23:44:38.602723   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:38.603199   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:38.603230   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:38.603140   22437 retry.go:31] will retry after 1.15163624s: waiting for machine to come up
	I0313 23:44:39.756107   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:39.756522   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:39.756558   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:39.756485   22437 retry.go:31] will retry after 2.030299917s: waiting for machine to come up
	I0313 23:44:41.789690   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:41.790051   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:41.790081   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:41.790003   22437 retry.go:31] will retry after 2.380119341s: waiting for machine to come up
	I0313 23:44:44.171371   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:44.171805   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:44.171843   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:44.171725   22437 retry.go:31] will retry after 3.5769802s: waiting for machine to come up
	I0313 23:44:47.749986   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:47.750442   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:47.750464   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:47.750386   22437 retry.go:31] will retry after 4.213108212s: waiting for machine to come up
	I0313 23:44:51.968766   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:51.969192   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:51.969213   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:51.969152   22437 retry.go:31] will retry after 3.948908595s: waiting for machine to come up
	I0313 23:44:55.919719   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:55.920146   22414 main.go:141] libmachine: (ha-504633) Found IP for machine: 192.168.39.31
	I0313 23:44:55.920185   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has current primary IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:55.920198   22414 main.go:141] libmachine: (ha-504633) Reserving static IP address...
	I0313 23:44:55.920529   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find host DHCP lease matching {name: "ha-504633", mac: "52:54:00:ad:1c:0e", ip: "192.168.39.31"} in network mk-ha-504633
	I0313 23:44:55.992191   22414 main.go:141] libmachine: (ha-504633) DBG | Getting to WaitForSSH function...
	I0313 23:44:55.992224   22414 main.go:141] libmachine: (ha-504633) Reserved static IP address: 192.168.39.31
	I0313 23:44:55.992238   22414 main.go:141] libmachine: (ha-504633) Waiting for SSH to be available...
	I0313 23:44:55.995144   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:55.995518   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:55.995546   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:55.995692   22414 main.go:141] libmachine: (ha-504633) DBG | Using SSH client type: external
	I0313 23:44:55.995719   22414 main.go:141] libmachine: (ha-504633) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa (-rw-------)
	I0313 23:44:55.995759   22414 main.go:141] libmachine: (ha-504633) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0313 23:44:55.995773   22414 main.go:141] libmachine: (ha-504633) DBG | About to run SSH command:
	I0313 23:44:55.995803   22414 main.go:141] libmachine: (ha-504633) DBG | exit 0
	I0313 23:44:56.123167   22414 main.go:141] libmachine: (ha-504633) DBG | SSH cmd err, output: <nil>: 
	I0313 23:44:56.123449   22414 main.go:141] libmachine: (ha-504633) KVM machine creation complete!
	I0313 23:44:56.123731   22414 main.go:141] libmachine: (ha-504633) Calling .GetConfigRaw
	I0313 23:44:56.124220   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:56.124427   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:56.124603   22414 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0313 23:44:56.124618   22414 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:44:56.125979   22414 main.go:141] libmachine: Detecting operating system of created instance...
	I0313 23:44:56.125995   22414 main.go:141] libmachine: Waiting for SSH to be available...
	I0313 23:44:56.126004   22414 main.go:141] libmachine: Getting to WaitForSSH function...
	I0313 23:44:56.126013   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:56.128796   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.129264   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.129302   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.129431   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:56.129603   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.129753   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.129919   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:56.130063   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:44:56.130340   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:44:56.130353   22414 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0313 23:44:56.242439   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:44:56.242468   22414 main.go:141] libmachine: Detecting the provisioner...
	I0313 23:44:56.242478   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:56.246986   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.247423   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.247465   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.247630   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:56.247840   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.248012   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.248172   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:56.248340   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:44:56.248489   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:44:56.248505   22414 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0313 23:44:56.360116   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0313 23:44:56.360192   22414 main.go:141] libmachine: found compatible host: buildroot
	I0313 23:44:56.360203   22414 main.go:141] libmachine: Provisioning with buildroot...
	I0313 23:44:56.360213   22414 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:44:56.360482   22414 buildroot.go:166] provisioning hostname "ha-504633"
	I0313 23:44:56.360504   22414 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:44:56.360700   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:56.363857   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.364223   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.364253   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.364329   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:56.364513   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.364706   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.364861   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:56.365034   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:44:56.365209   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:44:56.365223   22414 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-504633 && echo "ha-504633" | sudo tee /etc/hostname
	I0313 23:44:56.493407   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-504633
	
	I0313 23:44:56.493435   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:56.496404   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.496821   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.496850   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.497015   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:56.497217   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.497344   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.497450   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:56.497609   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:44:56.497770   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:44:56.497790   22414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-504633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-504633/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-504633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0313 23:44:56.620475   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:44:56.620502   22414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0313 23:44:56.620552   22414 buildroot.go:174] setting up certificates
	I0313 23:44:56.620563   22414 provision.go:84] configureAuth start
	I0313 23:44:56.620572   22414 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:44:56.620885   22414 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:44:56.623726   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.624098   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.624119   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.624330   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:56.626384   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.626663   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.626688   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.626833   22414 provision.go:143] copyHostCerts
	I0313 23:44:56.626865   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:44:56.626904   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0313 23:44:56.626915   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:44:56.626980   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0313 23:44:56.627074   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:44:56.627093   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0313 23:44:56.627097   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:44:56.627119   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0313 23:44:56.627170   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:44:56.627188   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0313 23:44:56.627194   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:44:56.627219   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0313 23:44:56.627274   22414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.ha-504633 san=[127.0.0.1 192.168.39.31 ha-504633 localhost minikube]
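The line above records the SAN set baked into the provisioner's server certificate (IPs 127.0.0.1 and 192.168.39.31, names ha-504633, localhost, minikube). As an illustration only, and not minikube's actual implementation, the following Go sketch issues a CA-signed server certificate with that same SAN set via the standard crypto/x509 package; the self-signed CA here merely stands in for ca.pem/ca-key.pem, and error handling is elided for brevity.

// Hedged sketch: issue a server cert with the SANs seen in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical self-signed CA standing in for ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-504633"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.31")},
		DNSNames:     []string{"ha-504633", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the certificate in the PEM form the provisioner later copies to /etc/docker/server.pem.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}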
	I0313 23:44:56.742896   22414 provision.go:177] copyRemoteCerts
	I0313 23:44:56.742947   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0313 23:44:56.742969   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:56.745562   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.745869   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.745899   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.746104   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:56.746279   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.746469   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:56.746588   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:44:56.833348   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0313 23:44:56.833410   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0313 23:44:56.859577   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0313 23:44:56.859643   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0313 23:44:56.884457   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0313 23:44:56.884525   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0313 23:44:56.908850   22414 provision.go:87] duration metric: took 288.275233ms to configureAuth
	I0313 23:44:56.908877   22414 buildroot.go:189] setting minikube options for container-runtime
	I0313 23:44:56.909026   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:44:56.909099   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:56.911808   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.912157   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.912184   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.912367   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:56.912551   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.912698   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.912850   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:56.913014   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:44:56.913188   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:44:56.913209   22414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0313 23:44:57.196917   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0313 23:44:57.196947   22414 main.go:141] libmachine: Checking connection to Docker...
	I0313 23:44:57.196957   22414 main.go:141] libmachine: (ha-504633) Calling .GetURL
	I0313 23:44:57.198383   22414 main.go:141] libmachine: (ha-504633) DBG | Using libvirt version 6000000
	I0313 23:44:57.200945   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.201281   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:57.201301   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.201550   22414 main.go:141] libmachine: Docker is up and running!
	I0313 23:44:57.201564   22414 main.go:141] libmachine: Reticulating splines...
	I0313 23:44:57.201571   22414 client.go:171] duration metric: took 24.987671205s to LocalClient.Create
	I0313 23:44:57.201593   22414 start.go:167] duration metric: took 24.987729845s to libmachine.API.Create "ha-504633"
	I0313 23:44:57.201601   22414 start.go:293] postStartSetup for "ha-504633" (driver="kvm2")
	I0313 23:44:57.201612   22414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0313 23:44:57.201628   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:57.201841   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0313 23:44:57.201862   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:57.204145   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.204499   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:57.204528   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.204618   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:57.204794   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:57.204949   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:57.205072   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:44:57.293878   22414 ssh_runner.go:195] Run: cat /etc/os-release
	I0313 23:44:57.298485   22414 info.go:137] Remote host: Buildroot 2023.02.9
	I0313 23:44:57.298510   22414 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0313 23:44:57.298589   22414 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0313 23:44:57.298679   22414 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0313 23:44:57.298691   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /etc/ssl/certs/122682.pem
	I0313 23:44:57.298817   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0313 23:44:57.308679   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:44:57.334334   22414 start.go:296] duration metric: took 132.719551ms for postStartSetup
	I0313 23:44:57.334387   22414 main.go:141] libmachine: (ha-504633) Calling .GetConfigRaw
	I0313 23:44:57.335039   22414 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:44:57.337483   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.337819   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:57.337873   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.338027   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:44:57.338216   22414 start.go:128] duration metric: took 25.141782705s to createHost
	I0313 23:44:57.338241   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:57.340536   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.340844   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:57.340881   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.340954   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:57.341172   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:57.341329   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:57.341514   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:57.341733   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:44:57.341876   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:44:57.341889   22414 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0313 23:44:57.455835   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710373497.428421798
	
	I0313 23:44:57.455856   22414 fix.go:216] guest clock: 1710373497.428421798
	I0313 23:44:57.455864   22414 fix.go:229] Guest: 2024-03-13 23:44:57.428421798 +0000 UTC Remote: 2024-03-13 23:44:57.338229619 +0000 UTC m=+25.260713200 (delta=90.192179ms)
	I0313 23:44:57.455904   22414 fix.go:200] guest clock delta is within tolerance: 90.192179ms
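For context on the guest-clock lines above: the guest returns a seconds.nanoseconds string from date +%s.%N, which is parsed and compared against the host clock, and the run proceeds because the 90.192179ms delta is under the tolerance. A minimal Go sketch of that kind of check, with an illustrative threshold and a stand-in host time (not minikube's actual code):

// Hedged sketch: parse a `date +%s.%N` sample and check clock skew against a tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// `date +%N` always prints nine digits, so the fraction is already in nanoseconds.
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1710373497.428421798") // value from the log above
	host := guest.Add(-90192179 * time.Nanosecond)       // stands in for the host clock reading
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold, not minikube's
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}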
	I0313 23:44:57.455912   22414 start.go:83] releasing machines lock for "ha-504633", held for 25.259544059s
	I0313 23:44:57.455929   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:57.456222   22414 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:44:57.458828   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.459263   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:57.459289   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.459431   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:57.459910   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:57.460077   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:57.460158   22414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0313 23:44:57.460208   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:57.460290   22414 ssh_runner.go:195] Run: cat /version.json
	I0313 23:44:57.460311   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:57.462602   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.462967   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.463007   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:57.463031   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.463152   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:57.463331   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:57.463522   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:57.463550   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.463568   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:57.463617   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:57.463713   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:44:57.463796   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:57.463930   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:57.464096   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:44:57.543720   22414 ssh_runner.go:195] Run: systemctl --version
	I0313 23:44:57.580818   22414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0313 23:44:57.742779   22414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0313 23:44:57.749815   22414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0313 23:44:57.749880   22414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0313 23:44:57.766967   22414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0313 23:44:57.766986   22414 start.go:494] detecting cgroup driver to use...
	I0313 23:44:57.767040   22414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0313 23:44:57.783463   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0313 23:44:57.797445   22414 docker.go:217] disabling cri-docker service (if available) ...
	I0313 23:44:57.797510   22414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0313 23:44:57.811066   22414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0313 23:44:57.825269   22414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0313 23:44:57.945932   22414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0313 23:44:58.085895   22414 docker.go:233] disabling docker service ...
	I0313 23:44:58.085969   22414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0313 23:44:58.101314   22414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0313 23:44:58.114944   22414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0313 23:44:58.261766   22414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0313 23:44:58.377397   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0313 23:44:58.393010   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0313 23:44:58.413240   22414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0313 23:44:58.413296   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:44:58.424471   22414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0313 23:44:58.424525   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:44:58.435516   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:44:58.446008   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:44:58.456482   22414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0313 23:44:58.467471   22414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0313 23:44:58.477162   22414 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0313 23:44:58.477208   22414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0313 23:44:58.490571   22414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0313 23:44:58.500186   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:44:58.616180   22414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0313 23:44:58.757118   22414 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0313 23:44:58.757183   22414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0313 23:44:58.761792   22414 start.go:562] Will wait 60s for crictl version
	I0313 23:44:58.761845   22414 ssh_runner.go:195] Run: which crictl
	I0313 23:44:58.765720   22414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0313 23:44:58.804603   22414 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0313 23:44:58.804690   22414 ssh_runner.go:195] Run: crio --version
	I0313 23:44:58.832457   22414 ssh_runner.go:195] Run: crio --version
	I0313 23:44:58.864814   22414 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0313 23:44:58.866087   22414 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:44:58.868642   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:58.868918   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:58.868947   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:58.869232   22414 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0313 23:44:58.873403   22414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:44:58.886969   22414 kubeadm.go:877] updating cluster {Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0313 23:44:58.887067   22414 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:44:58.887122   22414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:44:58.922426   22414 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0313 23:44:58.922517   22414 ssh_runner.go:195] Run: which lz4
	I0313 23:44:58.927152   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0313 23:44:58.927265   22414 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0313 23:44:58.931694   22414 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0313 23:44:58.931737   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0313 23:45:00.592949   22414 crio.go:444] duration metric: took 1.665723837s to copy over tarball
	I0313 23:45:00.593015   22414 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0313 23:45:02.970524   22414 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.377473368s)
	I0313 23:45:02.970552   22414 crio.go:451] duration metric: took 2.377583062s to extract the tarball
	I0313 23:45:02.970559   22414 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0313 23:45:03.017247   22414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:45:03.066718   22414 crio.go:496] all images are preloaded for cri-o runtime.
	I0313 23:45:03.066737   22414 cache_images.go:84] Images are preloaded, skipping loading
	I0313 23:45:03.066745   22414 kubeadm.go:928] updating node { 192.168.39.31 8443 v1.28.4 crio true true} ...
	I0313 23:45:03.066863   22414 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-504633 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0313 23:45:03.066925   22414 ssh_runner.go:195] Run: crio config
	I0313 23:45:03.114121   22414 cni.go:84] Creating CNI manager for ""
	I0313 23:45:03.114142   22414 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0313 23:45:03.114151   22414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0313 23:45:03.114175   22414 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.31 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-504633 NodeName:ha-504633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0313 23:45:03.114321   22414 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-504633"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0313 23:45:03.114344   22414 kube-vip.go:105] generating kube-vip config ...
	I0313 23:45:03.114408   22414 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0313 23:45:03.114463   22414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0313 23:45:03.125636   22414 binaries.go:44] Found k8s binaries, skipping transfer
	I0313 23:45:03.125707   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0313 23:45:03.136412   22414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0313 23:45:03.154601   22414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0313 23:45:03.173826   22414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0313 23:45:03.193412   22414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0313 23:45:03.212431   22414 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0313 23:45:03.216456   22414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:45:03.231229   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:45:03.372140   22414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:45:03.389338   22414 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633 for IP: 192.168.39.31
	I0313 23:45:03.389366   22414 certs.go:194] generating shared ca certs ...
	I0313 23:45:03.389389   22414 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:03.389555   22414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0313 23:45:03.389599   22414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0313 23:45:03.389608   22414 certs.go:256] generating profile certs ...
	I0313 23:45:03.389654   22414 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key
	I0313 23:45:03.389667   22414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.crt with IP's: []
	I0313 23:45:03.523525   22414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.crt ...
	I0313 23:45:03.523552   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.crt: {Name:mk22bec89923e7024371764bd175dc7af6d5fdb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:03.523743   22414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key ...
	I0313 23:45:03.523756   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key: {Name:mk73767ffed852771d73580f3602a0d681fcd72d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:03.523853   22414 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.505a42ea
	I0313 23:45:03.523869   22414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.505a42ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.31 192.168.39.254]
	I0313 23:45:03.692236   22414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.505a42ea ...
	I0313 23:45:03.692267   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.505a42ea: {Name:mk0792f22ba1e3bfeb549ffac82f09e7bc61c64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:03.692449   22414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.505a42ea ...
	I0313 23:45:03.692465   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.505a42ea: {Name:mka46a2ab563858a9ee7a9ac8ce0c41365de723d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:03.692566   22414 certs.go:381] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.505a42ea -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt
	I0313 23:45:03.692664   22414 certs.go:385] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.505a42ea -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key
	I0313 23:45:03.692718   22414 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key
	I0313 23:45:03.692733   22414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt with IP's: []
	I0313 23:45:03.821644   22414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt ...
	I0313 23:45:03.821673   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt: {Name:mk07f7b2b9ef33712403e38fb81f6fcd2fb94470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:03.821856   22414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key ...
	I0313 23:45:03.821871   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key: {Name:mkdbea9fe12c0064266a4011897ec2b342b77dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:03.821961   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0313 23:45:03.821980   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0313 23:45:03.821998   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0313 23:45:03.822012   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0313 23:45:03.822022   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0313 23:45:03.822034   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0313 23:45:03.822044   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0313 23:45:03.822054   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0313 23:45:03.822099   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0313 23:45:03.822132   22414 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0313 23:45:03.822141   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0313 23:45:03.822163   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0313 23:45:03.822184   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0313 23:45:03.822207   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0313 23:45:03.822240   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:45:03.822268   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:03.822280   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem -> /usr/share/ca-certificates/12268.pem
	I0313 23:45:03.822292   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /usr/share/ca-certificates/122682.pem
	I0313 23:45:03.822894   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0313 23:45:03.854236   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0313 23:45:03.881213   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0313 23:45:03.908312   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0313 23:45:03.936099   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0313 23:45:03.963287   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0313 23:45:03.989799   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0313 23:45:04.017256   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0313 23:45:04.043978   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0313 23:45:04.071641   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0313 23:45:04.098975   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0313 23:45:04.126675   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0313 23:45:04.144841   22414 ssh_runner.go:195] Run: openssl version
	I0313 23:45:04.151112   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0313 23:45:04.166310   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:04.178399   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:04.178462   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:04.185037   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0313 23:45:04.212789   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0313 23:45:04.225872   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0313 23:45:04.230723   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0313 23:45:04.230799   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0313 23:45:04.236814   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0313 23:45:04.250520   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0313 23:45:04.261446   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0313 23:45:04.266063   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0313 23:45:04.266103   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0313 23:45:04.271954   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
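The two-step pattern above (copy or link each PEM into /usr/share/ca-certificates, then create a "<subject-hash>.0" symlink in /etc/ssl/certs) is how OpenSSL-style trust stores locate a CA by its subject-name hash; the hash printed by openssl x509 -hash becomes the link name (e.g. b5213941.0 for minikubeCA.pem). A minimal Go sketch of the hash-symlink half, assuming openssl is on PATH and the target directory is writable; it is illustrative, not minikube's code:

// Hedged sketch: link a CA PEM into the trust store under its OpenSSL subject hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(certPath, certsDir string) error {
	// `openssl x509 -hash -noout -in <file>` prints the subject-name hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	// Paths taken from the log above; adjust for your own trust-store layout.
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}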
	I0313 23:45:04.283442   22414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0313 23:45:04.287981   22414 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0313 23:45:04.288041   22414 kubeadm.go:391] StartCluster: {Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:45:04.288116   22414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0313 23:45:04.288165   22414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0313 23:45:04.327317   22414 cri.go:89] found id: ""
	I0313 23:45:04.327400   22414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0313 23:45:04.338063   22414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0313 23:45:04.348309   22414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0313 23:45:04.358143   22414 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0313 23:45:04.358164   22414 kubeadm.go:156] found existing configuration files:
	
	I0313 23:45:04.358206   22414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0313 23:45:04.367644   22414 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0313 23:45:04.367739   22414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0313 23:45:04.377487   22414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0313 23:45:04.386536   22414 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0313 23:45:04.386584   22414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0313 23:45:04.396399   22414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0313 23:45:04.406760   22414 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0313 23:45:04.406837   22414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0313 23:45:04.417253   22414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0313 23:45:04.426848   22414 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0313 23:45:04.426893   22414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0313 23:45:04.436723   22414 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0313 23:45:04.687640   22414 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0313 23:45:16.757895   22414 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0313 23:45:16.757969   22414 kubeadm.go:309] [preflight] Running pre-flight checks
	I0313 23:45:16.758047   22414 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0313 23:45:16.758127   22414 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0313 23:45:16.758210   22414 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0313 23:45:16.758307   22414 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0313 23:45:16.759942   22414 out.go:204]   - Generating certificates and keys ...
	I0313 23:45:16.760040   22414 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0313 23:45:16.760114   22414 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0313 23:45:16.760214   22414 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0313 23:45:16.760320   22414 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0313 23:45:16.760409   22414 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0313 23:45:16.760482   22414 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0313 23:45:16.760569   22414 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0313 23:45:16.760749   22414 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-504633 localhost] and IPs [192.168.39.31 127.0.0.1 ::1]
	I0313 23:45:16.760824   22414 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0313 23:45:16.760941   22414 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-504633 localhost] and IPs [192.168.39.31 127.0.0.1 ::1]
	I0313 23:45:16.761037   22414 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0313 23:45:16.761115   22414 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0313 23:45:16.761187   22414 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0313 23:45:16.761270   22414 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0313 23:45:16.761354   22414 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0313 23:45:16.761428   22414 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0313 23:45:16.761557   22414 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0313 23:45:16.761648   22414 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0313 23:45:16.761748   22414 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0313 23:45:16.761838   22414 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0313 23:45:16.763193   22414 out.go:204]   - Booting up control plane ...
	I0313 23:45:16.763286   22414 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0313 23:45:16.763347   22414 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0313 23:45:16.763399   22414 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0313 23:45:16.763480   22414 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0313 23:45:16.763558   22414 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0313 23:45:16.763590   22414 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0313 23:45:16.763710   22414 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0313 23:45:16.763768   22414 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.108666 seconds
	I0313 23:45:16.763860   22414 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0313 23:45:16.763959   22414 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0313 23:45:16.764015   22414 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0313 23:45:16.764153   22414 kubeadm.go:309] [mark-control-plane] Marking the node ha-504633 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0313 23:45:16.764199   22414 kubeadm.go:309] [bootstrap-token] Using token: setsml.ffo6177g1a5h04fn
	I0313 23:45:16.765598   22414 out.go:204]   - Configuring RBAC rules ...
	I0313 23:45:16.765698   22414 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0313 23:45:16.765764   22414 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0313 23:45:16.765872   22414 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0313 23:45:16.765975   22414 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0313 23:45:16.766061   22414 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0313 23:45:16.766133   22414 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0313 23:45:16.766226   22414 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0313 23:45:16.766261   22414 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0313 23:45:16.766301   22414 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0313 23:45:16.766306   22414 kubeadm.go:309] 
	I0313 23:45:16.766372   22414 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0313 23:45:16.766387   22414 kubeadm.go:309] 
	I0313 23:45:16.766461   22414 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0313 23:45:16.766470   22414 kubeadm.go:309] 
	I0313 23:45:16.766510   22414 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0313 23:45:16.766578   22414 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0313 23:45:16.766648   22414 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0313 23:45:16.766658   22414 kubeadm.go:309] 
	I0313 23:45:16.766708   22414 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0313 23:45:16.766714   22414 kubeadm.go:309] 
	I0313 23:45:16.766768   22414 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0313 23:45:16.766774   22414 kubeadm.go:309] 
	I0313 23:45:16.766835   22414 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0313 23:45:16.766939   22414 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0313 23:45:16.767041   22414 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0313 23:45:16.767058   22414 kubeadm.go:309] 
	I0313 23:45:16.767170   22414 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0313 23:45:16.767289   22414 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0313 23:45:16.767302   22414 kubeadm.go:309] 
	I0313 23:45:16.767375   22414 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token setsml.ffo6177g1a5h04fn \
	I0313 23:45:16.767468   22414 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c \
	I0313 23:45:16.767487   22414 kubeadm.go:309] 	--control-plane 
	I0313 23:45:16.767494   22414 kubeadm.go:309] 
	I0313 23:45:16.767575   22414 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0313 23:45:16.767582   22414 kubeadm.go:309] 
	I0313 23:45:16.767648   22414 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token setsml.ffo6177g1a5h04fn \
	I0313 23:45:16.767758   22414 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c 
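The `--discovery-token-ca-cert-hash` printed in the join commands above is, per the kubeadm documentation, the SHA-256 of the DER-encoded Subject Public Key Info (SPKI) of the cluster CA's public key. A minimal Go sketch of recomputing it, assuming the CA file is `ca.crt` under the certificateDir logged earlier (`/var/lib/minikube/certs`):

```go
// Minimal sketch: recompute a kubeadm discovery-token-ca-cert-hash from the
// cluster CA certificate. The certificate path is an assumption for illustration.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in CA certificate")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// DER-encoded SubjectPublicKeyInfo of the CA public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}
```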
	I0313 23:45:16.767777   22414 cni.go:84] Creating CNI manager for ""
	I0313 23:45:16.767785   22414 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0313 23:45:16.769361   22414 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0313 23:45:16.770584   22414 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0313 23:45:16.800031   22414 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0313 23:45:16.800058   22414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0313 23:45:16.858285   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0313 23:45:18.072367   22414 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.214046702s)
	I0313 23:45:18.072402   22414 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0313 23:45:18.072513   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:18.072523   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-504633 minikube.k8s.io/updated_at=2024_03_13T23_45_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe minikube.k8s.io/name=ha-504633 minikube.k8s.io/primary=true
	I0313 23:45:18.094143   22414 ops.go:34] apiserver oom_adj: -16
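The `cat /proc/$(pgrep kube-apiserver)/oom_adj` command run above (result: -16) checks how well the apiserver process is shielded from the OOM killer. A hedged sketch of the same check run locally rather than over the VM's SSH session:

```go
// Hedged sketch of the oom_adj check logged above: find the kube-apiserver PID
// with pgrep and read /proc/<pid>/oom_adj. Requires pgrep on PATH.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err) // pgrep exits non-zero when no process matches
	}
	pid := strings.Fields(string(out))[0] // pgrep may list several PIDs; take the first
	val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(val)))
}
```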
	I0313 23:45:18.211662   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:18.712676   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:19.212460   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:19.712365   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:20.212342   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:20.712345   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:21.211885   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:21.712221   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:22.212036   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:22.712503   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:23.212718   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:23.712380   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:24.212002   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:24.712095   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:25.212706   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:25.711819   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:26.211934   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:26.712616   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:27.212316   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:27.712353   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:28.212015   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:28.301575   22414 kubeadm.go:1106] duration metric: took 10.229136912s to wait for elevateKubeSystemPrivileges
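The repeated `kubectl get sa default` calls above are a poll-until-ready loop: the same check roughly every 500 ms, bounded by a timeout, until the `default` ServiceAccount exists (about 10 s in this run). A minimal sketch of that pattern, with the check function standing in for the kubectl call:

```go
// Minimal sketch of the polling pattern above: retry a check on a fixed interval
// until it succeeds or the context times out. The check below is a stand-in for
// "kubectl get sa default" succeeding once the default ServiceAccount exists.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := check(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	attempts := 0
	err := pollUntil(ctx, 500*time.Millisecond, func() error {
		attempts++
		if attempts < 5 { // stand-in for the ServiceAccount not existing yet
			return errors.New(`serviceaccount "default" not found`)
		}
		return nil
	})
	fmt.Println("done after", attempts, "attempts, err:", err)
}
```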
	W0313 23:45:28.301614   22414 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0313 23:45:28.301620   22414 kubeadm.go:393] duration metric: took 24.013585791s to StartCluster
	I0313 23:45:28.301644   22414 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:28.301730   22414 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:45:28.302366   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:28.302599   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0313 23:45:28.302617   22414 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0313 23:45:28.302659   22414 addons.go:69] Setting storage-provisioner=true in profile "ha-504633"
	I0313 23:45:28.302689   22414 addons.go:234] Setting addon storage-provisioner=true in "ha-504633"
	I0313 23:45:28.302717   22414 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:45:28.302601   22414 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:45:28.302737   22414 addons.go:69] Setting default-storageclass=true in profile "ha-504633"
	I0313 23:45:28.302783   22414 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-504633"
	I0313 23:45:28.302738   22414 start.go:240] waiting for startup goroutines ...
	I0313 23:45:28.302868   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:45:28.303137   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:28.303167   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:28.303186   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:28.303225   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:28.318189   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0313 23:45:28.318523   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0313 23:45:28.318746   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:28.318872   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:28.319335   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:28.319351   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:28.319483   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:28.319515   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:28.319711   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:28.319876   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:28.320111   22414 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:45:28.320302   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:28.320346   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:28.322500   22414 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:45:28.322890   22414 kapi.go:59] client config for ha-504633: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.crt", KeyFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key", CAFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0313 23:45:28.323456   22414 cert_rotation.go:137] Starting client certificate rotation controller
	I0313 23:45:28.323664   22414 addons.go:234] Setting addon default-storageclass=true in "ha-504633"
	I0313 23:45:28.323708   22414 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:45:28.324095   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:28.324141   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:28.336407   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37349
	I0313 23:45:28.336828   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:28.337371   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:28.337399   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:28.337759   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:28.338060   22414 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:45:28.339181   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0313 23:45:28.339571   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:28.339905   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:45:28.340031   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:28.340057   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:28.342220   22414 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0313 23:45:28.340411   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:28.343730   22414 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0313 23:45:28.343755   22414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0313 23:45:28.343774   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:45:28.344781   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:28.344820   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:28.346887   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:45:28.347368   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:45:28.347405   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:45:28.347592   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:45:28.347791   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:45:28.347935   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:45:28.348057   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:45:28.360391   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38893
	I0313 23:45:28.360829   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:28.361252   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:28.361278   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:28.361645   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:28.361820   22414 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:45:28.363624   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:45:28.363850   22414 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0313 23:45:28.363869   22414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0313 23:45:28.363886   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:45:28.366432   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:45:28.366836   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:45:28.366860   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:45:28.367105   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:45:28.367310   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:45:28.367472   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:45:28.367632   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:45:28.442095   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0313 23:45:28.496536   22414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0313 23:45:28.562265   22414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0313 23:45:28.999817   22414 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
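The long pipeline at 23:45:28.442095 rewrites the CoreDNS ConfigMap so that `host.minikube.internal` resolves to the host IP: it inserts a `hosts` block immediately before the `forward . /etc/resolv.conf` line (and a `log` directive before `errors`), then replaces the ConfigMap. A hedged sketch of just the hosts-block insertion as a plain text transformation; the sample Corefile is illustrative, not the one from this cluster:

```go
// Hedged sketch of the Corefile edit above: insert a "hosts" block mapping
// host.minikube.internal to the host IP right before the forward plugin.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
```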
	I0313 23:45:29.337048   22414 main.go:141] libmachine: Making call to close driver server
	I0313 23:45:29.337080   22414 main.go:141] libmachine: (ha-504633) Calling .Close
	I0313 23:45:29.337057   22414 main.go:141] libmachine: Making call to close driver server
	I0313 23:45:29.337136   22414 main.go:141] libmachine: (ha-504633) Calling .Close
	I0313 23:45:29.337366   22414 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:45:29.337378   22414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:45:29.337386   22414 main.go:141] libmachine: Making call to close driver server
	I0313 23:45:29.337393   22414 main.go:141] libmachine: (ha-504633) Calling .Close
	I0313 23:45:29.337406   22414 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:45:29.337440   22414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:45:29.337462   22414 main.go:141] libmachine: Making call to close driver server
	I0313 23:45:29.337470   22414 main.go:141] libmachine: (ha-504633) Calling .Close
	I0313 23:45:29.337588   22414 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:45:29.337601   22414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:45:29.337711   22414 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:45:29.337732   22414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:45:29.337748   22414 main.go:141] libmachine: (ha-504633) DBG | Closing plugin on server side
	I0313 23:45:29.337852   22414 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0313 23:45:29.337865   22414 round_trippers.go:469] Request Headers:
	I0313 23:45:29.337876   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:45:29.337887   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:45:29.348574   22414 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0313 23:45:29.349129   22414 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0313 23:45:29.349144   22414 round_trippers.go:469] Request Headers:
	I0313 23:45:29.349155   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:45:29.349161   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:45:29.349167   22414 round_trippers.go:473]     Content-Type: application/json
	I0313 23:45:29.351890   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:45:29.352094   22414 main.go:141] libmachine: Making call to close driver server
	I0313 23:45:29.352108   22414 main.go:141] libmachine: (ha-504633) Calling .Close
	I0313 23:45:29.352378   22414 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:45:29.352397   22414 main.go:141] libmachine: Making call to close connection to plugin binary
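The GET on `/apis/storage.k8s.io/v1/storageclasses` followed by the PUT on `.../storageclasses/standard` above matches the default-storageclass addon checking and updating the `standard` StorageClass; a class is marked as the cluster default via the `storageclass.kubernetes.io/is-default-class` annotation. A hedged client-go sketch of such an update (illustrative only, not minikube's actual code; the kubeconfig path is a placeholder):

```go
// Hedged sketch: mark the "standard" StorageClass as the cluster default with
// client-go, roughly what the GET/PUT round-trip above corresponds to.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"

	if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println(`StorageClass "standard" marked as default`)
}
```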
	I0313 23:45:29.354171   22414 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0313 23:45:29.355441   22414 addons.go:505] duration metric: took 1.052819857s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0313 23:45:29.355475   22414 start.go:245] waiting for cluster config update ...
	I0313 23:45:29.355487   22414 start.go:254] writing updated cluster config ...
	I0313 23:45:29.356919   22414 out.go:177] 
	I0313 23:45:29.358206   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:45:29.358266   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:45:29.359776   22414 out.go:177] * Starting "ha-504633-m02" control-plane node in "ha-504633" cluster
	I0313 23:45:29.360982   22414 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:45:29.361010   22414 cache.go:56] Caching tarball of preloaded images
	I0313 23:45:29.361103   22414 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0313 23:45:29.361119   22414 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0313 23:45:29.361214   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:45:29.361431   22414 start.go:360] acquireMachinesLock for ha-504633-m02: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0313 23:45:29.361489   22414 start.go:364] duration metric: took 33.897µs to acquireMachinesLock for "ha-504633-m02"
	I0313 23:45:29.361510   22414 start.go:93] Provisioning new machine with config: &{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:45:29.361603   22414 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0313 23:45:29.364235   22414 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0313 23:45:29.364321   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:29.364353   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:29.378585   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34897
	I0313 23:45:29.379096   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:29.379628   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:29.379656   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:29.379951   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:29.380134   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetMachineName
	I0313 23:45:29.380265   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:29.380459   22414 start.go:159] libmachine.API.Create for "ha-504633" (driver="kvm2")
	I0313 23:45:29.380501   22414 client.go:168] LocalClient.Create starting
	I0313 23:45:29.380566   22414 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem
	I0313 23:45:29.380611   22414 main.go:141] libmachine: Decoding PEM data...
	I0313 23:45:29.380631   22414 main.go:141] libmachine: Parsing certificate...
	I0313 23:45:29.380677   22414 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem
	I0313 23:45:29.380700   22414 main.go:141] libmachine: Decoding PEM data...
	I0313 23:45:29.380710   22414 main.go:141] libmachine: Parsing certificate...
	I0313 23:45:29.380726   22414 main.go:141] libmachine: Running pre-create checks...
	I0313 23:45:29.380735   22414 main.go:141] libmachine: (ha-504633-m02) Calling .PreCreateCheck
	I0313 23:45:29.380897   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetConfigRaw
	I0313 23:45:29.381338   22414 main.go:141] libmachine: Creating machine...
	I0313 23:45:29.381353   22414 main.go:141] libmachine: (ha-504633-m02) Calling .Create
	I0313 23:45:29.381489   22414 main.go:141] libmachine: (ha-504633-m02) Creating KVM machine...
	I0313 23:45:29.382723   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found existing default KVM network
	I0313 23:45:29.382860   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found existing private KVM network mk-ha-504633
	I0313 23:45:29.383024   22414 main.go:141] libmachine: (ha-504633-m02) Setting up store path in /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02 ...
	I0313 23:45:29.383049   22414 main.go:141] libmachine: (ha-504633-m02) Building disk image from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0313 23:45:29.383124   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:29.383015   22745 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:45:29.383220   22414 main.go:141] libmachine: (ha-504633-m02) Downloading /home/jenkins/minikube-integration/18375-4912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0313 23:45:29.603731   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:29.603540   22745 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa...
	I0313 23:45:29.716976   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:29.716867   22745 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/ha-504633-m02.rawdisk...
	I0313 23:45:29.717033   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Writing magic tar header
	I0313 23:45:29.717049   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Writing SSH key tar header
	I0313 23:45:29.717061   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:29.717001   22745 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02 ...
	I0313 23:45:29.717163   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02
	I0313 23:45:29.717204   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines
	I0313 23:45:29.717222   22414 main.go:141] libmachine: (ha-504633-m02) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02 (perms=drwx------)
	I0313 23:45:29.717237   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:45:29.717264   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912
	I0313 23:45:29.717282   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0313 23:45:29.717296   22414 main.go:141] libmachine: (ha-504633-m02) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines (perms=drwxr-xr-x)
	I0313 23:45:29.717308   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Checking permissions on dir: /home/jenkins
	I0313 23:45:29.717323   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Checking permissions on dir: /home
	I0313 23:45:29.717334   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Skipping /home - not owner
	I0313 23:45:29.717352   22414 main.go:141] libmachine: (ha-504633-m02) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube (perms=drwxr-xr-x)
	I0313 23:45:29.717369   22414 main.go:141] libmachine: (ha-504633-m02) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912 (perms=drwxrwxr-x)
	I0313 23:45:29.717380   22414 main.go:141] libmachine: (ha-504633-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0313 23:45:29.717389   22414 main.go:141] libmachine: (ha-504633-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0313 23:45:29.717402   22414 main.go:141] libmachine: (ha-504633-m02) Creating domain...
	I0313 23:45:29.718283   22414 main.go:141] libmachine: (ha-504633-m02) define libvirt domain using xml: 
	I0313 23:45:29.718304   22414 main.go:141] libmachine: (ha-504633-m02) <domain type='kvm'>
	I0313 23:45:29.718315   22414 main.go:141] libmachine: (ha-504633-m02)   <name>ha-504633-m02</name>
	I0313 23:45:29.718323   22414 main.go:141] libmachine: (ha-504633-m02)   <memory unit='MiB'>2200</memory>
	I0313 23:45:29.718337   22414 main.go:141] libmachine: (ha-504633-m02)   <vcpu>2</vcpu>
	I0313 23:45:29.718348   22414 main.go:141] libmachine: (ha-504633-m02)   <features>
	I0313 23:45:29.718360   22414 main.go:141] libmachine: (ha-504633-m02)     <acpi/>
	I0313 23:45:29.718370   22414 main.go:141] libmachine: (ha-504633-m02)     <apic/>
	I0313 23:45:29.718398   22414 main.go:141] libmachine: (ha-504633-m02)     <pae/>
	I0313 23:45:29.718419   22414 main.go:141] libmachine: (ha-504633-m02)     
	I0313 23:45:29.718433   22414 main.go:141] libmachine: (ha-504633-m02)   </features>
	I0313 23:45:29.718445   22414 main.go:141] libmachine: (ha-504633-m02)   <cpu mode='host-passthrough'>
	I0313 23:45:29.718457   22414 main.go:141] libmachine: (ha-504633-m02)   
	I0313 23:45:29.718467   22414 main.go:141] libmachine: (ha-504633-m02)   </cpu>
	I0313 23:45:29.718481   22414 main.go:141] libmachine: (ha-504633-m02)   <os>
	I0313 23:45:29.718497   22414 main.go:141] libmachine: (ha-504633-m02)     <type>hvm</type>
	I0313 23:45:29.718510   22414 main.go:141] libmachine: (ha-504633-m02)     <boot dev='cdrom'/>
	I0313 23:45:29.718521   22414 main.go:141] libmachine: (ha-504633-m02)     <boot dev='hd'/>
	I0313 23:45:29.718534   22414 main.go:141] libmachine: (ha-504633-m02)     <bootmenu enable='no'/>
	I0313 23:45:29.718546   22414 main.go:141] libmachine: (ha-504633-m02)   </os>
	I0313 23:45:29.718557   22414 main.go:141] libmachine: (ha-504633-m02)   <devices>
	I0313 23:45:29.718567   22414 main.go:141] libmachine: (ha-504633-m02)     <disk type='file' device='cdrom'>
	I0313 23:45:29.718600   22414 main.go:141] libmachine: (ha-504633-m02)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/boot2docker.iso'/>
	I0313 23:45:29.718625   22414 main.go:141] libmachine: (ha-504633-m02)       <target dev='hdc' bus='scsi'/>
	I0313 23:45:29.718638   22414 main.go:141] libmachine: (ha-504633-m02)       <readonly/>
	I0313 23:45:29.718647   22414 main.go:141] libmachine: (ha-504633-m02)     </disk>
	I0313 23:45:29.718659   22414 main.go:141] libmachine: (ha-504633-m02)     <disk type='file' device='disk'>
	I0313 23:45:29.718670   22414 main.go:141] libmachine: (ha-504633-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0313 23:45:29.718686   22414 main.go:141] libmachine: (ha-504633-m02)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/ha-504633-m02.rawdisk'/>
	I0313 23:45:29.718699   22414 main.go:141] libmachine: (ha-504633-m02)       <target dev='hda' bus='virtio'/>
	I0313 23:45:29.718710   22414 main.go:141] libmachine: (ha-504633-m02)     </disk>
	I0313 23:45:29.718721   22414 main.go:141] libmachine: (ha-504633-m02)     <interface type='network'>
	I0313 23:45:29.718734   22414 main.go:141] libmachine: (ha-504633-m02)       <source network='mk-ha-504633'/>
	I0313 23:45:29.718744   22414 main.go:141] libmachine: (ha-504633-m02)       <model type='virtio'/>
	I0313 23:45:29.718752   22414 main.go:141] libmachine: (ha-504633-m02)     </interface>
	I0313 23:45:29.718781   22414 main.go:141] libmachine: (ha-504633-m02)     <interface type='network'>
	I0313 23:45:29.718793   22414 main.go:141] libmachine: (ha-504633-m02)       <source network='default'/>
	I0313 23:45:29.718819   22414 main.go:141] libmachine: (ha-504633-m02)       <model type='virtio'/>
	I0313 23:45:29.718830   22414 main.go:141] libmachine: (ha-504633-m02)     </interface>
	I0313 23:45:29.718837   22414 main.go:141] libmachine: (ha-504633-m02)     <serial type='pty'>
	I0313 23:45:29.718847   22414 main.go:141] libmachine: (ha-504633-m02)       <target port='0'/>
	I0313 23:45:29.718858   22414 main.go:141] libmachine: (ha-504633-m02)     </serial>
	I0313 23:45:29.718864   22414 main.go:141] libmachine: (ha-504633-m02)     <console type='pty'>
	I0313 23:45:29.718876   22414 main.go:141] libmachine: (ha-504633-m02)       <target type='serial' port='0'/>
	I0313 23:45:29.718886   22414 main.go:141] libmachine: (ha-504633-m02)     </console>
	I0313 23:45:29.718897   22414 main.go:141] libmachine: (ha-504633-m02)     <rng model='virtio'>
	I0313 23:45:29.718910   22414 main.go:141] libmachine: (ha-504633-m02)       <backend model='random'>/dev/random</backend>
	I0313 23:45:29.718920   22414 main.go:141] libmachine: (ha-504633-m02)     </rng>
	I0313 23:45:29.718928   22414 main.go:141] libmachine: (ha-504633-m02)     
	I0313 23:45:29.718936   22414 main.go:141] libmachine: (ha-504633-m02)     
	I0313 23:45:29.718943   22414 main.go:141] libmachine: (ha-504633-m02)   </devices>
	I0313 23:45:29.718958   22414 main.go:141] libmachine: (ha-504633-m02) </domain>
	I0313 23:45:29.718972   22414 main.go:141] libmachine: (ha-504633-m02) 
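The driver defines the new guest from an XML document like the one dumped above. As a hedged illustration only (the log shows only the finished string, not the kvm2 driver's actual builder), a comparable minimal `<domain>` definition can be produced with `encoding/xml`; all names and paths below are placeholders:

```go
// Hedged sketch: marshal a minimal libvirt <domain> definition similar to the
// one printed above. The struct fields cover only a subset of the elements.
package main

import (
	"encoding/xml"
	"fmt"
)

type Disk struct {
	Type   string `xml:"type,attr"`
	Device string `xml:"device,attr"`
	Source struct {
		File string `xml:"file,attr"`
	} `xml:"source"`
	Target struct {
		Dev string `xml:"dev,attr"`
		Bus string `xml:"bus,attr"`
	} `xml:"target"`
}

type Interface struct {
	Type   string `xml:"type,attr"`
	Source struct {
		Network string `xml:"network,attr"`
	} `xml:"source"`
	Model struct {
		Type string `xml:"type,attr"`
	} `xml:"model"`
}

type Memory struct {
	Unit  string `xml:"unit,attr"`
	Value string `xml:",chardata"`
}

type Domain struct {
	XMLName    xml.Name    `xml:"domain"`
	Type       string      `xml:"type,attr"`
	Name       string      `xml:"name"`
	Memory     Memory      `xml:"memory"`
	VCPU       int         `xml:"vcpu"`
	Disks      []Disk      `xml:"devices>disk"`
	Interfaces []Interface `xml:"devices>interface"`
}

func main() {
	d := Domain{
		Type:   "kvm",
		Name:   "example-m02", // placeholder name
		Memory: Memory{Unit: "MiB", Value: "2200"},
		VCPU:   2,
	}

	disk := Disk{Type: "file", Device: "disk"}
	disk.Source.File = "/path/to/example.rawdisk" // placeholder path
	disk.Target.Dev = "hda"
	disk.Target.Bus = "virtio"
	d.Disks = append(d.Disks, disk)

	nic := Interface{Type: "network"}
	nic.Source.Network = "mk-example" // placeholder network name
	nic.Model.Type = "virtio"
	d.Interfaces = append(d.Interfaces, nic)

	out, err := xml.MarshalIndent(d, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```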
	I0313 23:45:29.725679   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:1b:93:1d in network default
	I0313 23:45:29.726416   22414 main.go:141] libmachine: (ha-504633-m02) Ensuring networks are active...
	I0313 23:45:29.726445   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:29.727361   22414 main.go:141] libmachine: (ha-504633-m02) Ensuring network default is active
	I0313 23:45:29.727707   22414 main.go:141] libmachine: (ha-504633-m02) Ensuring network mk-ha-504633 is active
	I0313 23:45:29.728118   22414 main.go:141] libmachine: (ha-504633-m02) Getting domain xml...
	I0313 23:45:29.728982   22414 main.go:141] libmachine: (ha-504633-m02) Creating domain...
	I0313 23:45:30.923815   22414 main.go:141] libmachine: (ha-504633-m02) Waiting to get IP...
	I0313 23:45:30.924814   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:30.925187   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:30.925242   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:30.925181   22745 retry.go:31] will retry after 238.667554ms: waiting for machine to come up
	I0313 23:45:31.165691   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:31.166088   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:31.166122   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:31.166033   22745 retry.go:31] will retry after 269.695339ms: waiting for machine to come up
	I0313 23:45:31.437724   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:31.438322   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:31.438349   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:31.438262   22745 retry.go:31] will retry after 332.684451ms: waiting for machine to come up
	I0313 23:45:31.772916   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:31.773484   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:31.773528   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:31.773464   22745 retry.go:31] will retry after 528.114207ms: waiting for machine to come up
	I0313 23:45:32.303074   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:32.303578   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:32.303606   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:32.303529   22745 retry.go:31] will retry after 535.466395ms: waiting for machine to come up
	I0313 23:45:32.840325   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:32.840800   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:32.840825   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:32.840766   22745 retry.go:31] will retry after 815.467153ms: waiting for machine to come up
	I0313 23:45:33.657736   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:33.658193   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:33.658222   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:33.658155   22745 retry.go:31] will retry after 1.127123157s: waiting for machine to come up
	I0313 23:45:34.786490   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:34.786971   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:34.786997   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:34.786924   22745 retry.go:31] will retry after 1.006211279s: waiting for machine to come up
	I0313 23:45:35.794544   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:35.795021   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:35.795048   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:35.794982   22745 retry.go:31] will retry after 1.316637901s: waiting for machine to come up
	I0313 23:45:37.112803   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:37.113413   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:37.113436   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:37.113364   22745 retry.go:31] will retry after 1.641628067s: waiting for machine to come up
	I0313 23:45:38.758555   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:38.759025   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:38.759054   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:38.758971   22745 retry.go:31] will retry after 2.686943951s: waiting for machine to come up
	I0313 23:45:41.447850   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:41.448244   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:41.448267   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:41.448220   22745 retry.go:31] will retry after 3.433942106s: waiting for machine to come up
	I0313 23:45:44.883689   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:44.884110   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:44.884182   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:44.884065   22745 retry.go:31] will retry after 2.774438793s: waiting for machine to come up
	I0313 23:45:47.661899   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:47.662308   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:47.662325   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:47.662284   22745 retry.go:31] will retry after 4.804089976s: waiting for machine to come up
	I0313 23:45:52.469740   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:52.470264   22414 main.go:141] libmachine: (ha-504633-m02) Found IP for machine: 192.168.39.47
	I0313 23:45:52.470291   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has current primary IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
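While the guest boots, the driver polls for its DHCP lease by MAC address, retrying with a growing delay (238 ms up to ~4.8 s in the lines above) until an IP appears. A minimal sketch of that retry-with-backoff pattern; `lookupIP` is a stand-in for the lease query:

```go
// Minimal sketch of the "waiting for machine to come up" retry loop above:
// poll with an increasing, capped delay until an IP address is found or a
// deadline passes. lookupIP is a placeholder for the DHCP-lease lookup.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForIP(lookupIP func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay *= 2
		if delay > 5*time.Second {
			delay = 5 * time.Second // cap the backoff so polling stays responsive
		}
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 { // stand-in for "no DHCP lease for this MAC yet"
			return "", errors.New("no lease")
		}
		return "192.168.39.47", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
```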
	I0313 23:45:52.470300   22414 main.go:141] libmachine: (ha-504633-m02) Reserving static IP address...
	I0313 23:45:52.470665   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find host DHCP lease matching {name: "ha-504633-m02", mac: "52:54:00:56:27:e8", ip: "192.168.39.47"} in network mk-ha-504633
	I0313 23:45:52.542352   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Getting to WaitForSSH function...
	I0313 23:45:52.542383   22414 main.go:141] libmachine: (ha-504633-m02) Reserved static IP address: 192.168.39.47
	I0313 23:45:52.542397   22414 main.go:141] libmachine: (ha-504633-m02) Waiting for SSH to be available...
	I0313 23:45:52.544842   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:52.545119   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633
	I0313 23:45:52.545145   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find defined IP address of network mk-ha-504633 interface with MAC address 52:54:00:56:27:e8
	I0313 23:45:52.545264   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Using SSH client type: external
	I0313 23:45:52.545292   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa (-rw-------)
	I0313 23:45:52.545351   22414 main.go:141] libmachine: (ha-504633-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0313 23:45:52.545369   22414 main.go:141] libmachine: (ha-504633-m02) DBG | About to run SSH command:
	I0313 23:45:52.545384   22414 main.go:141] libmachine: (ha-504633-m02) DBG | exit 0
	I0313 23:45:52.548978   22414 main.go:141] libmachine: (ha-504633-m02) DBG | SSH cmd err, output: exit status 255: 
	I0313 23:45:52.548999   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0313 23:45:52.549010   22414 main.go:141] libmachine: (ha-504633-m02) DBG | command : exit 0
	I0313 23:45:52.549018   22414 main.go:141] libmachine: (ha-504633-m02) DBG | err     : exit status 255
	I0313 23:45:52.549028   22414 main.go:141] libmachine: (ha-504633-m02) DBG | output  : 
	I0313 23:45:55.550013   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Getting to WaitForSSH function...
	I0313 23:45:55.552387   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.552726   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:55.552754   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.552858   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Using SSH client type: external
	I0313 23:45:55.552886   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa (-rw-------)
	I0313 23:45:55.552915   22414 main.go:141] libmachine: (ha-504633-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0313 23:45:55.552931   22414 main.go:141] libmachine: (ha-504633-m02) DBG | About to run SSH command:
	I0313 23:45:55.552943   22414 main.go:141] libmachine: (ha-504633-m02) DBG | exit 0
	I0313 23:45:55.675034   22414 main.go:141] libmachine: (ha-504633-m02) DBG | SSH cmd err, output: <nil>: 
	I0313 23:45:55.675284   22414 main.go:141] libmachine: (ha-504633-m02) KVM machine creation complete!
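The WaitForSSH step above shells out to `/usr/bin/ssh` with non-interactive options (`StrictHostKeyChecking=no`, `ConnectTimeout=10`, key-only auth) and runs `exit 0`, treating the first clean exit as "SSH is up"; the first attempt fails with status 255, the retry succeeds. A hedged sketch of the same probe via `os/exec`; host, user and key path are placeholders:

```go
// Hedged sketch of the WaitForSSH probe above: run "exit 0" over ssh with
// non-interactive options until it succeeds. Requires the ssh binary on PATH.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(host, user, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0")
	return cmd.Run() == nil // a clean exit means sshd answered and ran the command
}

func main() {
	for i := 0; i < 20; i++ {
		if sshReady("192.168.39.47", "docker", "/path/to/id_rsa") { // placeholder key path
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log above waits ~3s between probes
	}
	fmt.Println("gave up waiting for SSH")
}
```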
	I0313 23:45:55.675599   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetConfigRaw
	I0313 23:45:55.676128   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:55.676317   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:55.676471   22414 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0313 23:45:55.676484   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetState
	I0313 23:45:55.677718   22414 main.go:141] libmachine: Detecting operating system of created instance...
	I0313 23:45:55.677734   22414 main.go:141] libmachine: Waiting for SSH to be available...
	I0313 23:45:55.677742   22414 main.go:141] libmachine: Getting to WaitForSSH function...
	I0313 23:45:55.677766   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:55.680082   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.680479   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:55.680504   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.680679   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:55.680884   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:55.681098   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:55.681273   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:55.681495   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:45:55.681754   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0313 23:45:55.681768   22414 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0313 23:45:55.782454   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:45:55.782482   22414 main.go:141] libmachine: Detecting the provisioner...
	I0313 23:45:55.782493   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:55.785256   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.785696   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:55.785729   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.785941   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:55.786136   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:55.786318   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:55.786495   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:55.786643   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:45:55.786829   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0313 23:45:55.786840   22414 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0313 23:45:55.887744   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0313 23:45:55.887802   22414 main.go:141] libmachine: found compatible host: buildroot
	I0313 23:45:55.887809   22414 main.go:141] libmachine: Provisioning with buildroot...
	I0313 23:45:55.887821   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetMachineName
	I0313 23:45:55.888053   22414 buildroot.go:166] provisioning hostname "ha-504633-m02"
	I0313 23:45:55.888083   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetMachineName
	I0313 23:45:55.888227   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:55.890727   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.891153   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:55.891191   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.891330   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:55.891546   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:55.891723   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:55.891920   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:55.892163   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:45:55.892333   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0313 23:45:55.892352   22414 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-504633-m02 && echo "ha-504633-m02" | sudo tee /etc/hostname
	I0313 23:45:56.010147   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-504633-m02
	
	I0313 23:45:56.010190   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:56.013048   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.013390   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.013416   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.013576   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:56.013765   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.013986   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.014146   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:56.014351   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:45:56.014510   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0313 23:45:56.014526   22414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-504633-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-504633-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-504633-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0313 23:45:56.128696   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:45:56.128724   22414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0313 23:45:56.128740   22414 buildroot.go:174] setting up certificates
	I0313 23:45:56.128751   22414 provision.go:84] configureAuth start
	I0313 23:45:56.128759   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetMachineName
	I0313 23:45:56.129076   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:45:56.132033   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.132490   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.132514   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.132662   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:56.135115   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.135472   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.135500   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.135645   22414 provision.go:143] copyHostCerts
	I0313 23:45:56.135691   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:45:56.135734   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0313 23:45:56.135743   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:45:56.135812   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0313 23:45:56.135891   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:45:56.135908   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0313 23:45:56.135914   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:45:56.135936   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0313 23:45:56.135986   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:45:56.136002   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0313 23:45:56.136008   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:45:56.136027   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0313 23:45:56.136071   22414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.ha-504633-m02 san=[127.0.0.1 192.168.39.47 ha-504633-m02 localhost minikube]
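
	(Editor's note, for context only: the server cert generated above carries SANs for the node IP, the control-plane VIP, and the machine hostnames. A minimal Go sketch of producing a certificate with an equivalent SAN list — self-signed here for brevity; minikube's real flow signs with the cluster CA and its helper names differ:)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-504633-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirroring the san=[...] list logged above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.47")},
			DNSNames:    []string{"ha-504633-m02", "localhost", "minikube"},
		}
		// Self-signed for illustration; the actual provisioner signs with ca.pem/ca-key.pem.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
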
	I0313 23:45:56.258650   22414 provision.go:177] copyRemoteCerts
	I0313 23:45:56.258701   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0313 23:45:56.258721   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:56.261365   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.261837   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.261866   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.262046   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:56.262301   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.262483   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:56.262611   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	I0313 23:45:56.342093   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0313 23:45:56.342157   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0313 23:45:56.368693   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0313 23:45:56.368758   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0313 23:45:56.394391   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0313 23:45:56.394467   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0313 23:45:56.421029   22414 provision.go:87] duration metric: took 292.265613ms to configureAuth
	I0313 23:45:56.421058   22414 buildroot.go:189] setting minikube options for container-runtime
	I0313 23:45:56.421284   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:45:56.421372   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:56.423816   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.424184   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.424232   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.424344   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:56.424557   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.424713   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.424824   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:56.424987   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:45:56.425185   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0313 23:45:56.425203   22414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0313 23:45:56.696023   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0313 23:45:56.696055   22414 main.go:141] libmachine: Checking connection to Docker...
	I0313 23:45:56.696063   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetURL
	I0313 23:45:56.697304   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Using libvirt version 6000000
	I0313 23:45:56.699333   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.699763   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.699800   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.699938   22414 main.go:141] libmachine: Docker is up and running!
	I0313 23:45:56.699956   22414 main.go:141] libmachine: Reticulating splines...
	I0313 23:45:56.699963   22414 client.go:171] duration metric: took 27.319451348s to LocalClient.Create
	I0313 23:45:56.699987   22414 start.go:167] duration metric: took 27.319533471s to libmachine.API.Create "ha-504633"
	I0313 23:45:56.700000   22414 start.go:293] postStartSetup for "ha-504633-m02" (driver="kvm2")
	I0313 23:45:56.700014   22414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0313 23:45:56.700034   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:56.700297   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0313 23:45:56.700317   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:56.702924   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.703363   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.703390   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.703602   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:56.703803   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.703990   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:56.704152   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	I0313 23:45:56.791654   22414 ssh_runner.go:195] Run: cat /etc/os-release
	I0313 23:45:56.795967   22414 info.go:137] Remote host: Buildroot 2023.02.9
	I0313 23:45:56.795988   22414 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0313 23:45:56.796046   22414 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0313 23:45:56.796116   22414 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0313 23:45:56.796127   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /etc/ssl/certs/122682.pem
	I0313 23:45:56.796210   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0313 23:45:56.807866   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:45:56.832492   22414 start.go:296] duration metric: took 132.481015ms for postStartSetup
	I0313 23:45:56.832538   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetConfigRaw
	I0313 23:45:56.833113   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:45:56.836449   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.836977   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.837009   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.837753   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:45:56.838006   22414 start.go:128] duration metric: took 27.47639171s to createHost
	I0313 23:45:56.838042   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:56.841352   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.841776   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.841819   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.842116   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:56.842351   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.842578   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.842840   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:56.843046   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:45:56.843213   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0313 23:45:56.843225   22414 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0313 23:45:56.944146   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710373556.932699776
	
	I0313 23:45:56.944171   22414 fix.go:216] guest clock: 1710373556.932699776
	I0313 23:45:56.944179   22414 fix.go:229] Guest: 2024-03-13 23:45:56.932699776 +0000 UTC Remote: 2024-03-13 23:45:56.838022897 +0000 UTC m=+84.760506472 (delta=94.676879ms)
	I0313 23:45:56.944193   22414 fix.go:200] guest clock delta is within tolerance: 94.676879ms
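
	(Editor's note: the "guest clock delta" lines compare `date +%s.%N` on the guest against the host's wall clock and accept a small skew. A minimal sketch of that comparison using the two timestamps logged above; the 2s tolerance here is an assumption, not minikube's actual constant:)

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports the absolute skew between guest and host clocks
	// and whether it falls inside the allowed tolerance.
	func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tol
	}

	func main() {
		guest := time.Unix(1710373556, 932699776) // from `date +%s.%N` on the guest
		host := time.Date(2024, 3, 13, 23, 45, 56, 838022897, time.UTC)
		delta, ok := withinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
	}
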
	I0313 23:45:56.944198   22414 start.go:83] releasing machines lock for "ha-504633-m02", held for 27.582698737s
	I0313 23:45:56.944222   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:56.944477   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:45:56.947033   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.947343   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.947368   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.949909   22414 out.go:177] * Found network options:
	I0313 23:45:56.951494   22414 out.go:177]   - NO_PROXY=192.168.39.31
	W0313 23:45:56.952816   22414 proxy.go:119] fail to check proxy env: Error ip not in block
	I0313 23:45:56.952844   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:56.953409   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:56.953577   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:56.953657   22414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0313 23:45:56.953684   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	W0313 23:45:56.953775   22414 proxy.go:119] fail to check proxy env: Error ip not in block
	I0313 23:45:56.953866   22414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0313 23:45:56.953890   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:56.956221   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.956342   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.956585   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.956609   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.956809   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:56.956843   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.956867   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.956990   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.957026   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:56.957168   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:56.957172   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.957355   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	I0313 23:45:56.957375   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:56.957516   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	I0313 23:45:57.205902   22414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0313 23:45:57.213432   22414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0313 23:45:57.213491   22414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0313 23:45:57.231091   22414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0313 23:45:57.231116   22414 start.go:494] detecting cgroup driver to use...
	I0313 23:45:57.231196   22414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0313 23:45:57.253572   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0313 23:45:57.271181   22414 docker.go:217] disabling cri-docker service (if available) ...
	I0313 23:45:57.271239   22414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0313 23:45:57.288471   22414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0313 23:45:57.303377   22414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0313 23:45:57.426602   22414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0313 23:45:57.570060   22414 docker.go:233] disabling docker service ...
	I0313 23:45:57.570122   22414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0313 23:45:57.585409   22414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0313 23:45:57.599737   22414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0313 23:45:57.744130   22414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0313 23:45:57.879672   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0313 23:45:57.895152   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0313 23:45:57.917190   22414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0313 23:45:57.917246   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:45:57.927977   22414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0313 23:45:57.928037   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:45:57.939210   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:45:57.950885   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:45:57.961971   22414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0313 23:45:57.972987   22414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0313 23:45:57.983426   22414 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0313 23:45:57.983487   22414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0313 23:45:57.998585   22414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0313 23:45:58.009366   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:45:58.139276   22414 ssh_runner.go:195] Run: sudo systemctl restart crio
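
	(Editor's note: the commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed to pin the pause image and switch the cgroup driver, then restart CRI-O. A rough Go equivalent of those two rewrites against a local copy of the file — illustrative only; the real edits run over SSH exactly as logged:)

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		out := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(out, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}
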
	I0313 23:45:58.281865   22414 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0313 23:45:58.281928   22414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0313 23:45:58.287730   22414 start.go:562] Will wait 60s for crictl version
	I0313 23:45:58.287785   22414 ssh_runner.go:195] Run: which crictl
	I0313 23:45:58.291722   22414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0313 23:45:58.337522   22414 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0313 23:45:58.337611   22414 ssh_runner.go:195] Run: crio --version
	I0313 23:45:58.367229   22414 ssh_runner.go:195] Run: crio --version
	I0313 23:45:58.398548   22414 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0313 23:45:58.400312   22414 out.go:177]   - env NO_PROXY=192.168.39.31
	I0313 23:45:58.401870   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:45:58.404617   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:58.404971   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:58.405011   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:58.405222   22414 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0313 23:45:58.409573   22414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:45:58.423481   22414 mustload.go:65] Loading cluster: ha-504633
	I0313 23:45:58.423707   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:45:58.423978   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:58.424031   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:58.439929   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I0313 23:45:58.440387   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:58.440818   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:58.440830   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:58.441179   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:58.441397   22414 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:45:58.442888   22414 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:45:58.443281   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:58.443325   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:58.457760   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33661
	I0313 23:45:58.458138   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:58.458582   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:58.458603   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:58.458964   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:58.459181   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:45:58.459357   22414 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633 for IP: 192.168.39.47
	I0313 23:45:58.459368   22414 certs.go:194] generating shared ca certs ...
	I0313 23:45:58.459385   22414 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:58.459499   22414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0313 23:45:58.459543   22414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0313 23:45:58.459557   22414 certs.go:256] generating profile certs ...
	I0313 23:45:58.459658   22414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key
	I0313 23:45:58.459693   22414 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.11fed047
	I0313 23:45:58.459713   22414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.11fed047 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.31 192.168.39.47 192.168.39.254]
	I0313 23:45:58.628806   22414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.11fed047 ...
	I0313 23:45:58.628834   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.11fed047: {Name:mkd54a5480bd97529ebe7020139c2848ba457963 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:58.629051   22414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.11fed047 ...
	I0313 23:45:58.629073   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.11fed047: {Name:mk72b13edd0ebac2393b4342e658100af58f8806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:58.629179   22414 certs.go:381] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.11fed047 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt
	I0313 23:45:58.629311   22414 certs.go:385] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.11fed047 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key
	I0313 23:45:58.629440   22414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key
	I0313 23:45:58.629461   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0313 23:45:58.629482   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0313 23:45:58.629497   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0313 23:45:58.629512   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0313 23:45:58.629533   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0313 23:45:58.629552   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0313 23:45:58.629568   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0313 23:45:58.629585   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0313 23:45:58.629662   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0313 23:45:58.629709   22414 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0313 23:45:58.629723   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0313 23:45:58.629759   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0313 23:45:58.629791   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0313 23:45:58.629822   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0313 23:45:58.629877   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:45:58.629919   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem -> /usr/share/ca-certificates/12268.pem
	I0313 23:45:58.629939   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /usr/share/ca-certificates/122682.pem
	I0313 23:45:58.629958   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:58.629999   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:45:58.632843   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:45:58.633272   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:45:58.633300   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:45:58.633450   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:45:58.633621   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:45:58.633826   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:45:58.633930   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:45:58.711289   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0313 23:45:58.716912   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0313 23:45:58.730305   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0313 23:45:58.735062   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0313 23:45:58.747692   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0313 23:45:58.752517   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0313 23:45:58.764245   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0313 23:45:58.768381   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0313 23:45:58.779388   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0313 23:45:58.787223   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0313 23:45:58.798951   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0313 23:45:58.803650   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0313 23:45:58.815944   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0313 23:45:58.842948   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0313 23:45:58.868043   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0313 23:45:58.892434   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0313 23:45:58.916983   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0313 23:45:58.944221   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0313 23:45:58.970408   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0313 23:45:58.995916   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0313 23:45:59.023224   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0313 23:45:59.049254   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0313 23:45:59.075856   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0313 23:45:59.102308   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0313 23:45:59.120294   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0313 23:45:59.138109   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0313 23:45:59.155731   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0313 23:45:59.173865   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0313 23:45:59.193376   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0313 23:45:59.212013   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0313 23:45:59.230226   22414 ssh_runner.go:195] Run: openssl version
	I0313 23:45:59.236022   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0313 23:45:59.248252   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0313 23:45:59.252999   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0313 23:45:59.253051   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0313 23:45:59.258801   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0313 23:45:59.270271   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0313 23:45:59.281980   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:59.287105   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:59.287186   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:59.293179   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0313 23:45:59.305730   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0313 23:45:59.317958   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0313 23:45:59.323143   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0313 23:45:59.323207   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0313 23:45:59.329074   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0313 23:45:59.341607   22414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0313 23:45:59.346065   22414 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0313 23:45:59.346121   22414 kubeadm.go:928] updating node {m02 192.168.39.47 8443 v1.28.4 crio true true} ...
	I0313 23:45:59.346266   22414 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-504633-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0313 23:45:59.346311   22414 kube-vip.go:105] generating kube-vip config ...
	I0313 23:45:59.346349   22414 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
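
	(Editor's note: the kube-vip manifest printed above is later copied to /etc/kubernetes/manifests/kube-vip.yaml as a static pod. A minimal text/template sketch of rendering its parameterized parts — VIP, port, interface, image; the template layout is an assumption and covers only a subset of the env vars shown:)

	package main

	import (
		"os"
		"text/template"
	)

	const kubeVipTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args: ["manager"]
	    env:
	    - name: port
	      value: "{{ .Port }}"
	    - name: vip_interface
	      value: {{ .Interface }}
	    - name: address
	      value: {{ .VIP }}
	    image: {{ .Image }}
	    name: kube-vip
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
		t.Execute(os.Stdout, struct {
			Port      int
			Interface string
			VIP       string
			Image     string
		}{8443, "eth0", "192.168.39.254", "ghcr.io/kube-vip/kube-vip:v0.7.1"})
	}
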
	I0313 23:45:59.346408   22414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0313 23:45:59.358406   22414 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0313 23:45:59.358470   22414 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0313 23:45:59.369388   22414 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0313 23:45:59.369416   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0313 23:45:59.369482   22414 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0313 23:45:59.369530   22414 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0313 23:45:59.369489   22414 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0313 23:45:59.374180   22414 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0313 23:45:59.374206   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0313 23:46:32.285539   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0313 23:46:32.285615   22414 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0313 23:46:32.290952   22414 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0313 23:46:32.290985   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0313 23:47:11.199243   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:47:11.217845   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0313 23:47:11.217934   22414 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0313 23:47:11.222619   22414 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0313 23:47:11.222649   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
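The three transfers above fetch kubectl, kubeadm and kubelet from dl.k8s.io with a `checksum=file:...sha256` query, i.e. each binary is verified against its published SHA-256 before being cached locally and copied into /var/lib/minikube/binaries/v1.28.4. A rough Go sketch of that download-and-verify pattern is shown below; the URL is the one from the log, but the helper itself is illustrative rather than minikube's actual downloader.

	// download_verify.go — illustrative sketch of a checksum-verified download.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// fetch downloads url to dest and returns the SHA-256 hex digest of the bytes written.
	func fetch(url, dest string) (string, error) {
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		out, err := os.Create(dest)
		if err != nil {
			return "", err
		}
		defer out.Close()
		h := sha256.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		url := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"
		got, err := fetch(url, "kubectl")
		if err != nil {
			panic(err)
		}
		// The companion .sha256 file holds the expected digest.
		resp, err := http.Get(url + ".sha256")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		want, _ := io.ReadAll(resp.Body)
		if got != strings.TrimSpace(string(want)) {
			fmt.Println("checksum mismatch")
			return
		}
		fmt.Println("kubectl verified:", got)
	}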
	I0313 23:47:11.680337   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0313 23:47:11.690162   22414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0313 23:47:11.708141   22414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0313 23:47:11.725505   22414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0313 23:47:11.743502   22414 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0313 23:47:11.747926   22414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:47:11.761386   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:47:11.887854   22414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:47:11.905728   22414 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:47:11.906198   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:47:11.906249   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:47:11.921485   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44229
	I0313 23:47:11.921982   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:47:11.922510   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:47:11.922540   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:47:11.922874   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:47:11.923041   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:47:11.923241   22414 start.go:316] joinCluster: &{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:47:11.923367   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0313 23:47:11.923391   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:47:11.926804   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:47:11.927193   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:47:11.927221   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:47:11.927349   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:47:11.927534   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:47:11.927701   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:47:11.927852   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:47:12.099969   22414 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:47:12.100021   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3aq05d.mnsuf0499qv3j76i --discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-504633-m02 --control-plane --apiserver-advertise-address=192.168.39.47 --apiserver-bind-port=8443"
	I0313 23:47:51.475828   22414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3aq05d.mnsuf0499qv3j76i --discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-504633-m02 --control-plane --apiserver-advertise-address=192.168.39.47 --apiserver-bind-port=8443": (39.375779052s)
	I0313 23:47:51.475861   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0313 23:47:51.985920   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-504633-m02 minikube.k8s.io/updated_at=2024_03_13T23_47_51_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe minikube.k8s.io/name=ha-504633 minikube.k8s.io/primary=false
	I0313 23:47:52.121205   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-504633-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0313 23:47:52.237528   22414 start.go:318] duration metric: took 40.314283326s to joinCluster
	I0313 23:47:52.237605   22414 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:47:52.239662   22414 out.go:177] * Verifying Kubernetes components...
	I0313 23:47:52.237861   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:47:52.241375   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:47:52.457661   22414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:47:52.477311   22414 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:47:52.477567   22414 kapi.go:59] client config for ha-504633: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.crt", KeyFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key", CAFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0313 23:47:52.477627   22414 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.31:8443
	I0313 23:47:52.477839   22414 node_ready.go:35] waiting up to 6m0s for node "ha-504633-m02" to be "Ready" ...
	I0313 23:47:52.477935   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:52.477946   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:52.477957   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:52.477964   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:52.493113   22414 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0313 23:47:52.978332   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:52.978352   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:52.978360   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:52.978365   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:52.983229   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:53.478574   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:53.478595   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:53.478607   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:53.478611   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:53.483227   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:53.978501   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:53.978524   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:53.978533   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:53.978538   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:53.982322   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:54.478972   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:54.478996   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:54.479006   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:54.479012   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:54.483583   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:54.484503   22414 node_ready.go:53] node "ha-504633-m02" has status "Ready":"False"
	I0313 23:47:54.978515   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:54.978537   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:54.978545   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:54.978549   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:54.983933   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:47:55.478164   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:55.478189   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:55.478198   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:55.478204   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:55.482562   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:55.979021   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:55.979049   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:55.979061   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:55.979065   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:55.982296   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:56.478032   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:56.478058   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:56.478069   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:56.478073   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:56.481921   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:56.978113   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:56.978135   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:56.978143   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:56.978146   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:56.983349   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:47:56.983967   22414 node_ready.go:53] node "ha-504633-m02" has status "Ready":"False"
	I0313 23:47:57.478829   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:57.478861   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:57.478872   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:57.478877   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:57.483587   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:57.978341   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:57.978362   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:57.978372   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:57.978378   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:57.982655   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:58.478375   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:58.478397   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:58.478407   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:58.478414   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:58.482079   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:58.978321   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:58.978344   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:58.978351   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:58.978355   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:58.982057   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:58.982809   22414 node_ready.go:49] node "ha-504633-m02" has status "Ready":"True"
	I0313 23:47:58.982826   22414 node_ready.go:38] duration metric: took 6.504971274s for node "ha-504633-m02" to be "Ready" ...
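The loop above issues a GET against /api/v1/nodes/ha-504633-m02 roughly every half second until the node's Ready condition turns True, which here took about 6.5s after the join. A condensed client-go sketch of the same wait is below; the kubeconfig path is a placeholder and the polling interval simply mirrors the log.

	// wait_node_ready.go — condensed client-go version of the polling loop above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node until its Ready condition is True or the timeout expires.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // same cadence as the GETs above
		}
		return fmt.Errorf("node %s not Ready after %s", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(cs, "ha-504633-m02", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}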
	I0313 23:47:58.982836   22414 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0313 23:47:58.982917   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:47:58.982928   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:58.982935   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:58.982938   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:58.988207   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:47:58.994146   22414 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dbkfv" in "kube-system" namespace to be "Ready" ...
	I0313 23:47:58.994211   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dbkfv
	I0313 23:47:58.994219   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:58.994226   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:58.994239   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:58.998051   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:58.999095   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:47:58.999110   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:58.999117   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:58.999122   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.003197   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:59.003782   22414 pod_ready.go:92] pod "coredns-5dd5756b68-dbkfv" in "kube-system" namespace has status "Ready":"True"
	I0313 23:47:59.003797   22414 pod_ready.go:81] duration metric: took 9.630585ms for pod "coredns-5dd5756b68-dbkfv" in "kube-system" namespace to be "Ready" ...
	I0313 23:47:59.003805   22414 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hh2kw" in "kube-system" namespace to be "Ready" ...
	I0313 23:47:59.003864   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-hh2kw
	I0313 23:47:59.003874   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.003880   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.003885   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.007817   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:59.008320   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:47:59.008334   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.008340   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.008346   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.011206   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:47:59.011793   22414 pod_ready.go:92] pod "coredns-5dd5756b68-hh2kw" in "kube-system" namespace has status "Ready":"True"
	I0313 23:47:59.011809   22414 pod_ready.go:81] duration metric: took 7.998065ms for pod "coredns-5dd5756b68-hh2kw" in "kube-system" namespace to be "Ready" ...
	I0313 23:47:59.011820   22414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:47:59.011873   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633
	I0313 23:47:59.011881   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.011888   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.011894   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.014563   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:47:59.015010   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:47:59.015023   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.015030   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.015036   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.017377   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:47:59.017771   22414 pod_ready.go:92] pod "etcd-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:47:59.017785   22414 pod_ready.go:81] duration metric: took 5.95535ms for pod "etcd-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:47:59.017792   22414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:47:59.017832   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:47:59.017840   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.017847   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.017852   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.020971   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:59.021669   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:59.021683   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.021689   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.021693   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.024788   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:59.518843   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:47:59.518866   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.518874   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.518878   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.523108   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:59.523813   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:59.523827   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.523834   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.523837   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.526949   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:00.018411   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:48:00.018433   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:00.018440   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:00.018444   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:00.022266   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:00.023044   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:00.023061   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:00.023069   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:00.023072   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:00.026130   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:00.518041   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:48:00.518063   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:00.518071   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:00.518075   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:00.522084   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:00.522900   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:00.522916   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:00.522925   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:00.522929   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:00.525916   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:48:01.018526   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:48:01.018555   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:01.018566   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:01.018571   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:01.022603   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:01.023282   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:01.023302   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:01.023312   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:01.023315   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:01.026348   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:01.026975   22414 pod_ready.go:102] pod "etcd-ha-504633-m02" in "kube-system" namespace has status "Ready":"False"
	I0313 23:48:01.518286   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:48:01.518320   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:01.518328   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:01.518332   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:01.522224   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:01.522904   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:01.522917   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:01.522927   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:01.522932   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:01.526296   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:02.018940   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:48:02.018962   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:02.018971   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:02.018976   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:02.022847   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:02.023554   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:02.023568   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:02.023575   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:02.023582   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:02.026974   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:02.518917   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:48:02.518948   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:02.518957   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:02.518962   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:02.523360   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:02.524105   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:02.524123   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:02.524133   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:02.524139   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:02.527021   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:48:03.017971   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:48:03.017993   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.018000   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.018006   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.021978   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:03.022782   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:03.022798   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.022809   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.022814   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.027600   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:03.028102   22414 pod_ready.go:92] pod "etcd-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:03.028121   22414 pod_ready.go:81] duration metric: took 4.010321625s for pod "etcd-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.028140   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.028203   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633
	I0313 23:48:03.028213   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.028224   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.028231   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.031188   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:48:03.031749   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:48:03.031764   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.031773   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.031778   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.034583   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:48:03.035266   22414 pod_ready.go:92] pod "kube-apiserver-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:03.035285   22414 pod_ready.go:81] duration metric: took 7.136593ms for pod "kube-apiserver-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.035298   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.035359   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633-m02
	I0313 23:48:03.035370   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.035379   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.035388   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.038372   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:48:03.038960   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:03.038976   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.038985   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.038988   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.042434   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:03.042966   22414 pod_ready.go:92] pod "kube-apiserver-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:03.042985   22414 pod_ready.go:81] duration metric: took 7.679023ms for pod "kube-apiserver-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.042998   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.179376   22414 request.go:629] Waited for 136.309846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633
	I0313 23:48:03.179447   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633
	I0313 23:48:03.179452   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.179504   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.179513   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.183195   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:03.379289   22414 request.go:629] Waited for 195.403269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:48:03.379350   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:48:03.379358   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.379368   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.379376   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.383468   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:03.384269   22414 pod_ready.go:92] pod "kube-controller-manager-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:03.384287   22414 pod_ready.go:81] duration metric: took 341.281587ms for pod "kube-controller-manager-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.384297   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.579273   22414 request.go:629] Waited for 194.904156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633-m02
	I0313 23:48:03.579324   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633-m02
	I0313 23:48:03.579330   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.579338   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.579342   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.583258   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:03.778723   22414 request.go:629] Waited for 194.4133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:03.778826   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:03.778842   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.778852   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.778861   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.782658   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:03.783399   22414 pod_ready.go:92] pod "kube-controller-manager-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:03.783419   22414 pod_ready.go:81] duration metric: took 399.114651ms for pod "kube-controller-manager-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.783432   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4s9t5" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.978697   22414 request.go:629] Waited for 195.188215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4s9t5
	I0313 23:48:03.978751   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4s9t5
	I0313 23:48:03.978756   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.978777   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.978783   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.983524   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:04.178832   22414 request.go:629] Waited for 194.433461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:04.178904   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:04.178910   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:04.178918   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:04.178925   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:04.182760   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:04.183322   22414 pod_ready.go:92] pod "kube-proxy-4s9t5" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:04.183341   22414 pod_ready.go:81] duration metric: took 399.902997ms for pod "kube-proxy-4s9t5" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:04.183351   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j56zl" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:04.379418   22414 request.go:629] Waited for 196.006939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j56zl
	I0313 23:48:04.379486   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j56zl
	I0313 23:48:04.379491   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:04.379498   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:04.379502   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:04.383452   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:04.578789   22414 request.go:629] Waited for 194.592749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:48:04.578870   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:48:04.578881   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:04.578888   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:04.578891   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:04.582983   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:04.583718   22414 pod_ready.go:92] pod "kube-proxy-j56zl" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:04.583738   22414 pod_ready.go:81] duration metric: took 400.380755ms for pod "kube-proxy-j56zl" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:04.583751   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:04.779022   22414 request.go:629] Waited for 195.183559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633
	I0313 23:48:04.779098   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633
	I0313 23:48:04.779105   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:04.779117   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:04.779129   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:04.783580   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:04.978825   22414 request.go:629] Waited for 194.38583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:48:04.978877   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:48:04.978882   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:04.978889   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:04.978894   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:04.984336   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:48:04.985132   22414 pod_ready.go:92] pod "kube-scheduler-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:04.985153   22414 pod_ready.go:81] duration metric: took 401.395449ms for pod "kube-scheduler-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:04.985163   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:05.179219   22414 request.go:629] Waited for 193.979517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633-m02
	I0313 23:48:05.179281   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633-m02
	I0313 23:48:05.179288   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:05.179296   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:05.179302   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:05.182936   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:05.378971   22414 request.go:629] Waited for 195.391408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:05.379022   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:05.379028   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:05.379034   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:05.379039   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:05.383483   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:05.384088   22414 pod_ready.go:92] pod "kube-scheduler-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:05.384107   22414 pod_ready.go:81] duration metric: took 398.938177ms for pod "kube-scheduler-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:05.384118   22414 pod_ready.go:38] duration metric: took 6.401255852s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
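Several of the requests above are delayed with "Waited for ... due to client-side throttling, not priority and fairness". That delay is imposed by client-go's own rate limiter: the rest.Config dumped earlier shows QPS:0 and Burst:0, so the client falls back to the defaults of roughly 5 requests per second with a burst of 10, and tight polling loops queue up locally before ever reaching the API server. A hedged sketch of raising those limits when building a client (the values are arbitrary, the kubeconfig path a placeholder):

	// raise_rate_limits.go — illustrative: lift client-go's client-side rate limits
	// so bursts of polling GETs are not delayed locally.
	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // client-go falls back to ~5 when this is left at 0
		cfg.Burst = 100 // client-go falls back to ~10 when this is left at 0
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("client built against", cfg.Host)
		_ = cs
	}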
	I0313 23:48:05.384133   22414 api_server.go:52] waiting for apiserver process to appear ...
	I0313 23:48:05.384189   22414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:48:05.402480   22414 api_server.go:72] duration metric: took 13.164836481s to wait for apiserver process to appear ...
	I0313 23:48:05.402502   22414 api_server.go:88] waiting for apiserver healthz status ...
	I0313 23:48:05.402519   22414 api_server.go:253] Checking apiserver healthz at https://192.168.39.31:8443/healthz ...
	I0313 23:48:05.409852   22414 api_server.go:279] https://192.168.39.31:8443/healthz returned 200:
	ok
	I0313 23:48:05.409925   22414 round_trippers.go:463] GET https://192.168.39.31:8443/version
	I0313 23:48:05.409931   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:05.409939   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:05.409949   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:05.411250   22414 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0313 23:48:05.411399   22414 api_server.go:141] control plane version: v1.28.4
	I0313 23:48:05.411426   22414 api_server.go:131] duration metric: took 8.915989ms to wait for apiserver health ...
	I0313 23:48:05.411437   22414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0313 23:48:05.578895   22414 request.go:629] Waited for 167.356892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:48:05.578949   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:48:05.578959   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:05.578970   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:05.578983   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:05.585342   22414 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0313 23:48:05.590324   22414 system_pods.go:59] 17 kube-system pods found
	I0313 23:48:05.590350   22414 system_pods.go:61] "coredns-5dd5756b68-dbkfv" [bb55bb86-7637-4571-af89-55b34361d46f] Running
	I0313 23:48:05.590354   22414 system_pods.go:61] "coredns-5dd5756b68-hh2kw" [ac81d022-8c47-4f99-8a34-bb4f73ead561] Running
	I0313 23:48:05.590358   22414 system_pods.go:61] "etcd-ha-504633" [eddb386f-0b62-4325-a14d-c2bb03e656bb] Running
	I0313 23:48:05.590362   22414 system_pods.go:61] "etcd-ha-504633-m02" [faf6ca7d-343c-4a0d-91ed-d2a401952f47] Running
	I0313 23:48:05.590364   22414 system_pods.go:61] "kindnet-8kvnb" [b356234a-5293-417c-b78f-8d532dfe1bc1] Running
	I0313 23:48:05.590367   22414 system_pods.go:61] "kindnet-f4pz8" [9df1057a-d870-4e77-9261-0db3a8f2700f] Running
	I0313 23:48:05.590370   22414 system_pods.go:61] "kube-apiserver-ha-504633" [f4d0ca2b-c730-43a3-838f-05d28db9de36] Running
	I0313 23:48:05.590372   22414 system_pods.go:61] "kube-apiserver-ha-504633-m02" [3eff93a9-6e69-4837-9fe0-4357ae747b22] Running
	I0313 23:48:05.590376   22414 system_pods.go:61] "kube-controller-manager-ha-504633" [2a252519-49e0-4dce-81b3-73d6ea16b4d1] Running
	I0313 23:48:05.590379   22414 system_pods.go:61] "kube-controller-manager-ha-504633-m02" [b0ead72b-b8f6-449b-8f08-98bb5181b597] Running
	I0313 23:48:05.590382   22414 system_pods.go:61] "kube-proxy-4s9t5" [e635f7e7-bc86-47d1-8368-db03fda06076] Running
	I0313 23:48:05.590384   22414 system_pods.go:61] "kube-proxy-j56zl" [9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4] Running
	I0313 23:48:05.590387   22414 system_pods.go:61] "kube-scheduler-ha-504633" [41980454-97cd-4fa1-ac32-edc1e1c5bc02] Running
	I0313 23:48:05.590390   22414 system_pods.go:61] "kube-scheduler-ha-504633-m02" [24d5f3a2-02f9-4f0b-b1ea-94b8c065b6e4] Running
	I0313 23:48:05.590396   22414 system_pods.go:61] "kube-vip-ha-504633" [eed1a72e-a42d-4961-b3bf-f3a2bb7fa3bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:48:05.590403   22414 system_pods.go:61] "kube-vip-ha-504633-m02" [88577446-c879-45b6-a7f6-6b76e9e26fcc] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:48:05.590408   22414 system_pods.go:61] "storage-provisioner" [0e57f625-8927-418c-bdf2-9022439f858c] Running
	I0313 23:48:05.590416   22414 system_pods.go:74] duration metric: took 178.969124ms to wait for pod list to return data ...
	I0313 23:48:05.590427   22414 default_sa.go:34] waiting for default service account to be created ...
	I0313 23:48:05.778863   22414 request.go:629] Waited for 188.346037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/default/serviceaccounts
	I0313 23:48:05.778945   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/default/serviceaccounts
	I0313 23:48:05.778951   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:05.778958   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:05.778962   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:05.783286   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:05.783501   22414 default_sa.go:45] found service account: "default"
	I0313 23:48:05.783520   22414 default_sa.go:55] duration metric: took 193.086181ms for default service account to be created ...
	I0313 23:48:05.783531   22414 system_pods.go:116] waiting for k8s-apps to be running ...
	I0313 23:48:05.979041   22414 request.go:629] Waited for 195.427717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:48:05.979166   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:48:05.979188   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:05.979198   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:05.979205   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:05.985102   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:48:05.989345   22414 system_pods.go:86] 17 kube-system pods found
	I0313 23:48:05.989379   22414 system_pods.go:89] "coredns-5dd5756b68-dbkfv" [bb55bb86-7637-4571-af89-55b34361d46f] Running
	I0313 23:48:05.989388   22414 system_pods.go:89] "coredns-5dd5756b68-hh2kw" [ac81d022-8c47-4f99-8a34-bb4f73ead561] Running
	I0313 23:48:05.989395   22414 system_pods.go:89] "etcd-ha-504633" [eddb386f-0b62-4325-a14d-c2bb03e656bb] Running
	I0313 23:48:05.989402   22414 system_pods.go:89] "etcd-ha-504633-m02" [faf6ca7d-343c-4a0d-91ed-d2a401952f47] Running
	I0313 23:48:05.989407   22414 system_pods.go:89] "kindnet-8kvnb" [b356234a-5293-417c-b78f-8d532dfe1bc1] Running
	I0313 23:48:05.989413   22414 system_pods.go:89] "kindnet-f4pz8" [9df1057a-d870-4e77-9261-0db3a8f2700f] Running
	I0313 23:48:05.989420   22414 system_pods.go:89] "kube-apiserver-ha-504633" [f4d0ca2b-c730-43a3-838f-05d28db9de36] Running
	I0313 23:48:05.989430   22414 system_pods.go:89] "kube-apiserver-ha-504633-m02" [3eff93a9-6e69-4837-9fe0-4357ae747b22] Running
	I0313 23:48:05.989437   22414 system_pods.go:89] "kube-controller-manager-ha-504633" [2a252519-49e0-4dce-81b3-73d6ea16b4d1] Running
	I0313 23:48:05.989450   22414 system_pods.go:89] "kube-controller-manager-ha-504633-m02" [b0ead72b-b8f6-449b-8f08-98bb5181b597] Running
	I0313 23:48:05.989456   22414 system_pods.go:89] "kube-proxy-4s9t5" [e635f7e7-bc86-47d1-8368-db03fda06076] Running
	I0313 23:48:05.989465   22414 system_pods.go:89] "kube-proxy-j56zl" [9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4] Running
	I0313 23:48:05.989474   22414 system_pods.go:89] "kube-scheduler-ha-504633" [41980454-97cd-4fa1-ac32-edc1e1c5bc02] Running
	I0313 23:48:05.989483   22414 system_pods.go:89] "kube-scheduler-ha-504633-m02" [24d5f3a2-02f9-4f0b-b1ea-94b8c065b6e4] Running
	I0313 23:48:05.989496   22414 system_pods.go:89] "kube-vip-ha-504633" [eed1a72e-a42d-4961-b3bf-f3a2bb7fa3bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:48:05.989507   22414 system_pods.go:89] "kube-vip-ha-504633-m02" [88577446-c879-45b6-a7f6-6b76e9e26fcc] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:48:05.989516   22414 system_pods.go:89] "storage-provisioner" [0e57f625-8927-418c-bdf2-9022439f858c] Running
	I0313 23:48:05.989524   22414 system_pods.go:126] duration metric: took 205.987083ms to wait for k8s-apps to be running ...
	I0313 23:48:05.989533   22414 system_svc.go:44] waiting for kubelet service to be running ....
	I0313 23:48:05.989583   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:48:06.006976   22414 system_svc.go:56] duration metric: took 17.436264ms WaitForService to wait for kubelet
	I0313 23:48:06.007006   22414 kubeadm.go:576] duration metric: took 13.76935953s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0313 23:48:06.007036   22414 node_conditions.go:102] verifying NodePressure condition ...
	I0313 23:48:06.178382   22414 request.go:629] Waited for 171.274898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes
	I0313 23:48:06.178443   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes
	I0313 23:48:06.178448   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:06.178455   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:06.178461   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:06.182313   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:06.183249   22414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0313 23:48:06.183270   22414 node_conditions.go:123] node cpu capacity is 2
	I0313 23:48:06.183284   22414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0313 23:48:06.183289   22414 node_conditions.go:123] node cpu capacity is 2
	I0313 23:48:06.183294   22414 node_conditions.go:105] duration metric: took 176.25042ms to run NodePressure ...
	I0313 23:48:06.183307   22414 start.go:240] waiting for startup goroutines ...
	I0313 23:48:06.183338   22414 start.go:254] writing updated cluster config ...
	I0313 23:48:06.185457   22414 out.go:177] 
	I0313 23:48:06.187324   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:48:06.187462   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:48:06.189637   22414 out.go:177] * Starting "ha-504633-m03" control-plane node in "ha-504633" cluster
	I0313 23:48:06.191396   22414 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:48:06.191420   22414 cache.go:56] Caching tarball of preloaded images
	I0313 23:48:06.191518   22414 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0313 23:48:06.191529   22414 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0313 23:48:06.191626   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:48:06.191804   22414 start.go:360] acquireMachinesLock for ha-504633-m03: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0313 23:48:06.191845   22414 start.go:364] duration metric: took 22.662µs to acquireMachinesLock for "ha-504633-m03"
	I0313 23:48:06.191858   22414 start.go:93] Provisioning new machine with config: &{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
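Note: the single log line above is the full cluster configuration that minikube persists to the profile's config.json before provisioning the m03 machine. As an illustration only, here is a minimal Go sketch of reading such a profile file and listing its nodes; the struct fields are assumptions chosen to mirror fields visible in the log, not minikube's actual types.

// read_profile.go - illustrative only: a hypothetical reader for a minikube
// profile config.json. Field names mirror what the log prints; they are
// assumptions, not minikube's real internal types.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type Node struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

type Profile struct {
	Name  string
	Nodes []Node
}

func main() {
	// e.g. .minikube/profiles/ha-504633/config.json (placeholder path)
	data, err := os.ReadFile("config.json")
	if err != nil {
		panic(err)
	}
	var p Profile
	if err := json.Unmarshal(data, &p); err != nil {
		panic(err)
	}
	for _, n := range p.Nodes {
		fmt.Printf("%s/%s %s control-plane=%v\n", p.Name, n.Name, n.IP, n.ControlPlane)
	}
}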
	I0313 23:48:06.191972   22414 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0313 23:48:06.193917   22414 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0313 23:48:06.193999   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:48:06.194032   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:48:06.208696   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41991
	I0313 23:48:06.209197   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:48:06.209657   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:48:06.209682   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:48:06.210020   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:48:06.210225   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetMachineName
	I0313 23:48:06.210434   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:06.210629   22414 start.go:159] libmachine.API.Create for "ha-504633" (driver="kvm2")
	I0313 23:48:06.210662   22414 client.go:168] LocalClient.Create starting
	I0313 23:48:06.210699   22414 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem
	I0313 23:48:06.210746   22414 main.go:141] libmachine: Decoding PEM data...
	I0313 23:48:06.210780   22414 main.go:141] libmachine: Parsing certificate...
	I0313 23:48:06.210839   22414 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem
	I0313 23:48:06.210859   22414 main.go:141] libmachine: Decoding PEM data...
	I0313 23:48:06.210871   22414 main.go:141] libmachine: Parsing certificate...
	I0313 23:48:06.210888   22414 main.go:141] libmachine: Running pre-create checks...
	I0313 23:48:06.210895   22414 main.go:141] libmachine: (ha-504633-m03) Calling .PreCreateCheck
	I0313 23:48:06.211118   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetConfigRaw
	I0313 23:48:06.211520   22414 main.go:141] libmachine: Creating machine...
	I0313 23:48:06.211533   22414 main.go:141] libmachine: (ha-504633-m03) Calling .Create
	I0313 23:48:06.211662   22414 main.go:141] libmachine: (ha-504633-m03) Creating KVM machine...
	I0313 23:48:06.213229   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found existing default KVM network
	I0313 23:48:06.213321   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found existing private KVM network mk-ha-504633
	I0313 23:48:06.213492   22414 main.go:141] libmachine: (ha-504633-m03) Setting up store path in /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03 ...
	I0313 23:48:06.213532   22414 main.go:141] libmachine: (ha-504633-m03) Building disk image from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0313 23:48:06.213634   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:06.213484   23288 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:48:06.213784   22414 main.go:141] libmachine: (ha-504633-m03) Downloading /home/jenkins/minikube-integration/18375-4912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0313 23:48:06.428369   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:06.428224   23288 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa...
	I0313 23:48:06.650011   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:06.649902   23288 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/ha-504633-m03.rawdisk...
	I0313 23:48:06.650044   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Writing magic tar header
	I0313 23:48:06.650058   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Writing SSH key tar header
	I0313 23:48:06.650149   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:06.650055   23288 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03 ...
	I0313 23:48:06.650201   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03
	I0313 23:48:06.650213   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines
	I0313 23:48:06.650231   22414 main.go:141] libmachine: (ha-504633-m03) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03 (perms=drwx------)
	I0313 23:48:06.650251   22414 main.go:141] libmachine: (ha-504633-m03) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines (perms=drwxr-xr-x)
	I0313 23:48:06.650271   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:48:06.650287   22414 main.go:141] libmachine: (ha-504633-m03) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube (perms=drwxr-xr-x)
	I0313 23:48:06.650301   22414 main.go:141] libmachine: (ha-504633-m03) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912 (perms=drwxrwxr-x)
	I0313 23:48:06.650313   22414 main.go:141] libmachine: (ha-504633-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0313 23:48:06.650322   22414 main.go:141] libmachine: (ha-504633-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0313 23:48:06.650333   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912
	I0313 23:48:06.650346   22414 main.go:141] libmachine: (ha-504633-m03) Creating domain...
	I0313 23:48:06.650366   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0313 23:48:06.650374   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Checking permissions on dir: /home/jenkins
	I0313 23:48:06.650380   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Checking permissions on dir: /home
	I0313 23:48:06.650386   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Skipping /home - not owner
	I0313 23:48:06.651437   22414 main.go:141] libmachine: (ha-504633-m03) define libvirt domain using xml: 
	I0313 23:48:06.651458   22414 main.go:141] libmachine: (ha-504633-m03) <domain type='kvm'>
	I0313 23:48:06.651469   22414 main.go:141] libmachine: (ha-504633-m03)   <name>ha-504633-m03</name>
	I0313 23:48:06.651477   22414 main.go:141] libmachine: (ha-504633-m03)   <memory unit='MiB'>2200</memory>
	I0313 23:48:06.651486   22414 main.go:141] libmachine: (ha-504633-m03)   <vcpu>2</vcpu>
	I0313 23:48:06.651497   22414 main.go:141] libmachine: (ha-504633-m03)   <features>
	I0313 23:48:06.651505   22414 main.go:141] libmachine: (ha-504633-m03)     <acpi/>
	I0313 23:48:06.651513   22414 main.go:141] libmachine: (ha-504633-m03)     <apic/>
	I0313 23:48:06.651518   22414 main.go:141] libmachine: (ha-504633-m03)     <pae/>
	I0313 23:48:06.651523   22414 main.go:141] libmachine: (ha-504633-m03)     
	I0313 23:48:06.651529   22414 main.go:141] libmachine: (ha-504633-m03)   </features>
	I0313 23:48:06.651540   22414 main.go:141] libmachine: (ha-504633-m03)   <cpu mode='host-passthrough'>
	I0313 23:48:06.651559   22414 main.go:141] libmachine: (ha-504633-m03)   
	I0313 23:48:06.651582   22414 main.go:141] libmachine: (ha-504633-m03)   </cpu>
	I0313 23:48:06.651591   22414 main.go:141] libmachine: (ha-504633-m03)   <os>
	I0313 23:48:06.651598   22414 main.go:141] libmachine: (ha-504633-m03)     <type>hvm</type>
	I0313 23:48:06.651607   22414 main.go:141] libmachine: (ha-504633-m03)     <boot dev='cdrom'/>
	I0313 23:48:06.651612   22414 main.go:141] libmachine: (ha-504633-m03)     <boot dev='hd'/>
	I0313 23:48:06.651621   22414 main.go:141] libmachine: (ha-504633-m03)     <bootmenu enable='no'/>
	I0313 23:48:06.651625   22414 main.go:141] libmachine: (ha-504633-m03)   </os>
	I0313 23:48:06.651630   22414 main.go:141] libmachine: (ha-504633-m03)   <devices>
	I0313 23:48:06.651638   22414 main.go:141] libmachine: (ha-504633-m03)     <disk type='file' device='cdrom'>
	I0313 23:48:06.651676   22414 main.go:141] libmachine: (ha-504633-m03)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/boot2docker.iso'/>
	I0313 23:48:06.651710   22414 main.go:141] libmachine: (ha-504633-m03)       <target dev='hdc' bus='scsi'/>
	I0313 23:48:06.651723   22414 main.go:141] libmachine: (ha-504633-m03)       <readonly/>
	I0313 23:48:06.651734   22414 main.go:141] libmachine: (ha-504633-m03)     </disk>
	I0313 23:48:06.651748   22414 main.go:141] libmachine: (ha-504633-m03)     <disk type='file' device='disk'>
	I0313 23:48:06.651761   22414 main.go:141] libmachine: (ha-504633-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0313 23:48:06.651778   22414 main.go:141] libmachine: (ha-504633-m03)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/ha-504633-m03.rawdisk'/>
	I0313 23:48:06.651806   22414 main.go:141] libmachine: (ha-504633-m03)       <target dev='hda' bus='virtio'/>
	I0313 23:48:06.651818   22414 main.go:141] libmachine: (ha-504633-m03)     </disk>
	I0313 23:48:06.651829   22414 main.go:141] libmachine: (ha-504633-m03)     <interface type='network'>
	I0313 23:48:06.651841   22414 main.go:141] libmachine: (ha-504633-m03)       <source network='mk-ha-504633'/>
	I0313 23:48:06.651852   22414 main.go:141] libmachine: (ha-504633-m03)       <model type='virtio'/>
	I0313 23:48:06.651862   22414 main.go:141] libmachine: (ha-504633-m03)     </interface>
	I0313 23:48:06.651877   22414 main.go:141] libmachine: (ha-504633-m03)     <interface type='network'>
	I0313 23:48:06.651886   22414 main.go:141] libmachine: (ha-504633-m03)       <source network='default'/>
	I0313 23:48:06.651894   22414 main.go:141] libmachine: (ha-504633-m03)       <model type='virtio'/>
	I0313 23:48:06.651904   22414 main.go:141] libmachine: (ha-504633-m03)     </interface>
	I0313 23:48:06.651909   22414 main.go:141] libmachine: (ha-504633-m03)     <serial type='pty'>
	I0313 23:48:06.651917   22414 main.go:141] libmachine: (ha-504633-m03)       <target port='0'/>
	I0313 23:48:06.651922   22414 main.go:141] libmachine: (ha-504633-m03)     </serial>
	I0313 23:48:06.651930   22414 main.go:141] libmachine: (ha-504633-m03)     <console type='pty'>
	I0313 23:48:06.651941   22414 main.go:141] libmachine: (ha-504633-m03)       <target type='serial' port='0'/>
	I0313 23:48:06.651960   22414 main.go:141] libmachine: (ha-504633-m03)     </console>
	I0313 23:48:06.651973   22414 main.go:141] libmachine: (ha-504633-m03)     <rng model='virtio'>
	I0313 23:48:06.651984   22414 main.go:141] libmachine: (ha-504633-m03)       <backend model='random'>/dev/random</backend>
	I0313 23:48:06.651994   22414 main.go:141] libmachine: (ha-504633-m03)     </rng>
	I0313 23:48:06.652001   22414 main.go:141] libmachine: (ha-504633-m03)     
	I0313 23:48:06.652011   22414 main.go:141] libmachine: (ha-504633-m03)     
	I0313 23:48:06.652017   22414 main.go:141] libmachine: (ha-504633-m03)   </devices>
	I0313 23:48:06.652023   22414 main.go:141] libmachine: (ha-504633-m03) </domain>
	I0313 23:48:06.652030   22414 main.go:141] libmachine: (ha-504633-m03) 
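Note: the kvm2 driver defines the VM by rendering a libvirt domain XML like the one echoed above (name, 2200 MiB memory, 2 vCPUs, boot ISO, raw disk, two virtio NICs). A minimal sketch of producing a similarly shaped XML with Go's text/template follows; the template is deliberately simplified and is not the driver's real template.

// domain_xml.go - illustrative sketch: render a simplified libvirt domain XML
// like the one logged above. Not the kvm2 driver's real template.
package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>
`

type vm struct {
	Name, ISOPath, DiskPath, Network string
	MemoryMiB, CPUs                  int
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Values mirror the m03 node in the log; paths are placeholders.
	_ = t.Execute(os.Stdout, vm{
		Name: "ha-504633-m03", MemoryMiB: 2200, CPUs: 2,
		ISOPath:  "/path/to/boot2docker.iso",
		DiskPath: "/path/to/ha-504633-m03.rawdisk",
		Network:  "mk-ha-504633",
	})
}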
	I0313 23:48:06.660477   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:39:86:a2 in network default
	I0313 23:48:06.661268   22414 main.go:141] libmachine: (ha-504633-m03) Ensuring networks are active...
	I0313 23:48:06.661289   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:06.662120   22414 main.go:141] libmachine: (ha-504633-m03) Ensuring network default is active
	I0313 23:48:06.662585   22414 main.go:141] libmachine: (ha-504633-m03) Ensuring network mk-ha-504633 is active
	I0313 23:48:06.663022   22414 main.go:141] libmachine: (ha-504633-m03) Getting domain xml...
	I0313 23:48:06.663865   22414 main.go:141] libmachine: (ha-504633-m03) Creating domain...
	I0313 23:48:07.899152   22414 main.go:141] libmachine: (ha-504633-m03) Waiting to get IP...
	I0313 23:48:07.900091   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:07.900537   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:07.900579   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:07.900503   23288 retry.go:31] will retry after 279.429776ms: waiting for machine to come up
	I0313 23:48:08.182127   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:08.182510   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:08.182539   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:08.182475   23288 retry.go:31] will retry after 280.916957ms: waiting for machine to come up
	I0313 23:48:08.464904   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:08.465438   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:08.465465   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:08.465381   23288 retry.go:31] will retry after 355.252581ms: waiting for machine to come up
	I0313 23:48:08.822123   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:08.822598   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:08.822625   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:08.822548   23288 retry.go:31] will retry after 578.530778ms: waiting for machine to come up
	I0313 23:48:09.402293   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:09.402759   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:09.402809   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:09.402706   23288 retry.go:31] will retry after 626.205833ms: waiting for machine to come up
	I0313 23:48:10.030354   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:10.030847   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:10.030875   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:10.030777   23288 retry.go:31] will retry after 661.699082ms: waiting for machine to come up
	I0313 23:48:10.694180   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:10.694639   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:10.694660   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:10.694599   23288 retry.go:31] will retry after 1.125196766s: waiting for machine to come up
	I0313 23:48:11.821217   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:11.821725   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:11.821747   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:11.821691   23288 retry.go:31] will retry after 1.11519518s: waiting for machine to come up
	I0313 23:48:12.939126   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:12.939562   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:12.939579   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:12.939541   23288 retry.go:31] will retry after 1.82498896s: waiting for machine to come up
	I0313 23:48:14.766124   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:14.766589   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:14.766645   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:14.766569   23288 retry.go:31] will retry after 2.004419745s: waiting for machine to come up
	I0313 23:48:16.772997   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:16.773447   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:16.773473   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:16.773411   23288 retry.go:31] will retry after 2.159705549s: waiting for machine to come up
	I0313 23:48:18.935766   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:18.936247   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:18.936272   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:18.936208   23288 retry.go:31] will retry after 3.427169274s: waiting for machine to come up
	I0313 23:48:22.364471   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:22.364909   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:22.364934   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:22.364874   23288 retry.go:31] will retry after 3.920707034s: waiting for machine to come up
	I0313 23:48:26.287337   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:26.287749   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:26.287775   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:26.287693   23288 retry.go:31] will retry after 4.612548047s: waiting for machine to come up
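Note: the lines above show the driver polling libvirt's DHCP leases for the new VM's MAC address, sleeping with a growing delay between attempts until an IP appears. A minimal sketch of that wait-and-retry pattern, with a hypothetical lookupIP stand-in for the real lease lookup:

// retry_ip.go - illustrative sketch of the wait-for-IP retry pattern shown above.
// lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("no lease yet")

// lookupIP would ask libvirt for the lease matching the VM's MAC address.
func lookupIP(mac string) (string, error) { return "", errNoLease }

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second { // grow the delay, roughly like the log above
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:94:1d:f9", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}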
	I0313 23:48:30.905349   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:30.905815   22414 main.go:141] libmachine: (ha-504633-m03) Found IP for machine: 192.168.39.156
	I0313 23:48:30.905841   22414 main.go:141] libmachine: (ha-504633-m03) Reserving static IP address...
	I0313 23:48:30.905854   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has current primary IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:30.906225   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find host DHCP lease matching {name: "ha-504633-m03", mac: "52:54:00:94:1d:f9", ip: "192.168.39.156"} in network mk-ha-504633
	I0313 23:48:30.977479   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Getting to WaitForSSH function...
	I0313 23:48:30.977510   22414 main.go:141] libmachine: (ha-504633-m03) Reserved static IP address: 192.168.39.156
	I0313 23:48:30.977524   22414 main.go:141] libmachine: (ha-504633-m03) Waiting for SSH to be available...
	I0313 23:48:30.980054   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:30.980415   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633
	I0313 23:48:30.980444   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find defined IP address of network mk-ha-504633 interface with MAC address 52:54:00:94:1d:f9
	I0313 23:48:30.980645   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Using SSH client type: external
	I0313 23:48:30.980672   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa (-rw-------)
	I0313 23:48:30.980704   22414 main.go:141] libmachine: (ha-504633-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0313 23:48:30.980722   22414 main.go:141] libmachine: (ha-504633-m03) DBG | About to run SSH command:
	I0313 23:48:30.980738   22414 main.go:141] libmachine: (ha-504633-m03) DBG | exit 0
	I0313 23:48:30.984225   22414 main.go:141] libmachine: (ha-504633-m03) DBG | SSH cmd err, output: exit status 255: 
	I0313 23:48:30.984243   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0313 23:48:30.984250   22414 main.go:141] libmachine: (ha-504633-m03) DBG | command : exit 0
	I0313 23:48:30.984256   22414 main.go:141] libmachine: (ha-504633-m03) DBG | err     : exit status 255
	I0313 23:48:30.984263   22414 main.go:141] libmachine: (ha-504633-m03) DBG | output  : 
	I0313 23:48:33.986686   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Getting to WaitForSSH function...
	I0313 23:48:33.988995   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:33.989367   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:33.989403   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:33.989468   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Using SSH client type: external
	I0313 23:48:33.989491   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa (-rw-------)
	I0313 23:48:33.989530   22414 main.go:141] libmachine: (ha-504633-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0313 23:48:33.989544   22414 main.go:141] libmachine: (ha-504633-m03) DBG | About to run SSH command:
	I0313 23:48:33.989570   22414 main.go:141] libmachine: (ha-504633-m03) DBG | exit 0
	I0313 23:48:34.110725   22414 main.go:141] libmachine: (ha-504633-m03) DBG | SSH cmd err, output: <nil>: 
	I0313 23:48:34.110984   22414 main.go:141] libmachine: (ha-504633-m03) KVM machine creation complete!
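Note: WaitForSSH above simply runs `exit 0` through the external ssh client until it stops failing (the first attempt returned exit status 255 because the guest was still booting). A minimal sketch of that probe, assuming an ssh binary on PATH and using placeholder address, user, and key values:

// wait_for_ssh.go - illustrative sketch of the WaitForSSH probe above: run
// `ssh ... exit 0` until it succeeds. Assumes an ssh binary on PATH; the
// address, user and key path are placeholders.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(addr, user, key string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", key, fmt.Sprintf("%s@%s", user, addr), "exit 0")
	return cmd.Run() == nil // exit status 255 while the guest is still booting
}

func main() {
	for i := 0; i < 20; i++ {
		if sshReady("192.168.39.156", "docker", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log above retries on a similar cadence
	}
	fmt.Println("gave up waiting for SSH")
}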
	I0313 23:48:34.111290   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetConfigRaw
	I0313 23:48:34.111849   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:34.112070   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:34.112307   22414 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0313 23:48:34.112326   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetState
	I0313 23:48:34.113582   22414 main.go:141] libmachine: Detecting operating system of created instance...
	I0313 23:48:34.113600   22414 main.go:141] libmachine: Waiting for SSH to be available...
	I0313 23:48:34.113607   22414 main.go:141] libmachine: Getting to WaitForSSH function...
	I0313 23:48:34.113620   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:34.116063   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.116433   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.116458   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.116615   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:34.116779   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.116936   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.117079   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:34.117246   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:48:34.117476   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0313 23:48:34.117488   22414 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0313 23:48:34.218175   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:48:34.218198   22414 main.go:141] libmachine: Detecting the provisioner...
	I0313 23:48:34.218205   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:34.221129   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.221446   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.221511   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.221654   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:34.221904   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.222101   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.222250   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:34.222398   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:48:34.222579   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0313 23:48:34.222612   22414 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0313 23:48:34.323667   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0313 23:48:34.323723   22414 main.go:141] libmachine: found compatible host: buildroot
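Note: the provisioner is detected by reading the guest's /etc/os-release, captured above as Buildroot 2023.02.9. A minimal sketch of parsing that key=value output:

// os_release.go - illustrative sketch: parse /etc/os-release output like the
// one captured above to detect the provisioner (Buildroot here).
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(s string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		out[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}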
	I0313 23:48:34.323730   22414 main.go:141] libmachine: Provisioning with buildroot...
	I0313 23:48:34.323737   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetMachineName
	I0313 23:48:34.324017   22414 buildroot.go:166] provisioning hostname "ha-504633-m03"
	I0313 23:48:34.324049   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetMachineName
	I0313 23:48:34.324258   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:34.327094   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.327541   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.327569   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.327681   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:34.327866   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.327985   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.328128   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:34.328253   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:48:34.328402   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0313 23:48:34.328414   22414 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-504633-m03 && echo "ha-504633-m03" | sudo tee /etc/hostname
	I0313 23:48:34.442416   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-504633-m03
	
	I0313 23:48:34.442441   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:34.445489   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.445976   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.446007   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.446179   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:34.446435   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.446629   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.446806   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:34.446969   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:48:34.447153   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0313 23:48:34.447177   22414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-504633-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-504633-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-504633-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0313 23:48:34.556883   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:48:34.556914   22414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0313 23:48:34.556933   22414 buildroot.go:174] setting up certificates
	I0313 23:48:34.556946   22414 provision.go:84] configureAuth start
	I0313 23:48:34.556963   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetMachineName
	I0313 23:48:34.557273   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:48:34.559957   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.560418   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.560447   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.560666   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:34.563247   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.563586   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.563609   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.563781   22414 provision.go:143] copyHostCerts
	I0313 23:48:34.563810   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:48:34.563847   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0313 23:48:34.563858   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:48:34.563925   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0313 23:48:34.563994   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:48:34.564011   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0313 23:48:34.564017   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:48:34.564045   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0313 23:48:34.564086   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:48:34.564102   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0313 23:48:34.564108   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:48:34.564127   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0313 23:48:34.564173   22414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.ha-504633-m03 san=[127.0.0.1 192.168.39.156 ha-504633-m03 localhost minikube]
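Note: configureAuth generates a server certificate whose SANs are listed above (127.0.0.1, 192.168.39.156, ha-504633-m03, localhost, minikube). A minimal crypto/x509 sketch of creating a certificate carrying those SANs follows; for brevity it is self-signed rather than signed with the profile CA, so it is an illustration, not what minikube actually does.

// server_cert.go - illustrative sketch: create a server certificate carrying
// SANs like those in the log (IPs + hostnames). For brevity it is self-signed;
// minikube signs with the profile CA instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-504633-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-504633-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.156")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}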
	I0313 23:48:34.695002   22414 provision.go:177] copyRemoteCerts
	I0313 23:48:34.695054   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0313 23:48:34.695074   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:34.697643   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.698030   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.698057   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.698237   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:34.698424   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.698626   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:34.698817   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:48:34.783808   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0313 23:48:34.783882   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0313 23:48:34.814591   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0313 23:48:34.814657   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0313 23:48:34.844611   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0313 23:48:34.844686   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0313 23:48:34.871720   22414 provision.go:87] duration metric: took 314.757689ms to configureAuth
	I0313 23:48:34.871745   22414 buildroot.go:189] setting minikube options for container-runtime
	I0313 23:48:34.872007   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:48:34.872103   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:34.874669   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.875068   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.875097   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.875342   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:34.875517   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.875648   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.875751   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:34.875899   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:48:34.876092   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0313 23:48:34.876115   22414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0313 23:48:35.140993   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0313 23:48:35.141022   22414 main.go:141] libmachine: Checking connection to Docker...
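Note: the %!s(MISSING) fragments in the crio configuration command a few lines above (and in the `date +%!s(MISSING).%!N(MISSING)` probe further down, whose intent is `date +%s.%N`) are Go's fmt marker for a format verb with no matching argument: the command string was evidently passed through a printf-style call without arguments, so the placeholders show up mangled in the log. A two-line demonstration:

// fmt_missing.go - demonstrates why the log shows "%!s(MISSING)": a string
// containing %s printed via a printf-style call with no matching argument.
package main

import "fmt"

func main() {
	cmd := "sudo mkdir -p /etc/sysconfig && printf %s \"...\""
	fmt.Printf(cmd + "\n") // deliberately no argument for %s -> prints %!s(MISSING)
}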
	I0313 23:48:35.141039   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetURL
	I0313 23:48:35.142371   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Using libvirt version 6000000
	I0313 23:48:35.144667   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.145063   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:35.145091   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.145250   22414 main.go:141] libmachine: Docker is up and running!
	I0313 23:48:35.145262   22414 main.go:141] libmachine: Reticulating splines...
	I0313 23:48:35.145268   22414 client.go:171] duration metric: took 28.934599353s to LocalClient.Create
	I0313 23:48:35.145294   22414 start.go:167] duration metric: took 28.934664266s to libmachine.API.Create "ha-504633"
	I0313 23:48:35.145307   22414 start.go:293] postStartSetup for "ha-504633-m03" (driver="kvm2")
	I0313 23:48:35.145321   22414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0313 23:48:35.145337   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:35.145561   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0313 23:48:35.145620   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:35.147933   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.148269   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:35.148292   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.148437   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:35.148631   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:35.148815   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:35.148976   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:48:35.230518   22414 ssh_runner.go:195] Run: cat /etc/os-release
	I0313 23:48:35.235076   22414 info.go:137] Remote host: Buildroot 2023.02.9
	I0313 23:48:35.235107   22414 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0313 23:48:35.235173   22414 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0313 23:48:35.235273   22414 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0313 23:48:35.235286   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /etc/ssl/certs/122682.pem
	I0313 23:48:35.235390   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0313 23:48:35.246856   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:48:35.272754   22414 start.go:296] duration metric: took 127.430693ms for postStartSetup
	I0313 23:48:35.272817   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetConfigRaw
	I0313 23:48:35.273395   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:48:35.276063   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.276434   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:35.276466   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.276817   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:48:35.277013   22414 start.go:128] duration metric: took 29.085030265s to createHost
	I0313 23:48:35.277035   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:35.279688   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.280086   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:35.280115   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.280307   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:35.280544   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:35.280732   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:35.280910   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:35.281091   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:48:35.281314   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0313 23:48:35.281329   22414 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0313 23:48:35.383994   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710373715.361768120
	
	I0313 23:48:35.384023   22414 fix.go:216] guest clock: 1710373715.361768120
	I0313 23:48:35.384035   22414 fix.go:229] Guest: 2024-03-13 23:48:35.36176812 +0000 UTC Remote: 2024-03-13 23:48:35.277024662 +0000 UTC m=+243.199508230 (delta=84.743458ms)
	I0313 23:48:35.384056   22414 fix.go:200] guest clock delta is within tolerance: 84.743458ms
	I0313 23:48:35.384064   22414 start.go:83] releasing machines lock for "ha-504633-m03", held for 29.192212918s
	I0313 23:48:35.384118   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:35.384400   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:48:35.386936   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.387364   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:35.387390   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.389774   22414 out.go:177] * Found network options:
	I0313 23:48:35.391527   22414 out.go:177]   - NO_PROXY=192.168.39.31,192.168.39.47
	W0313 23:48:35.393085   22414 proxy.go:119] fail to check proxy env: Error ip not in block
	W0313 23:48:35.393107   22414 proxy.go:119] fail to check proxy env: Error ip not in block
	I0313 23:48:35.393123   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:35.393768   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:35.393962   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:35.394068   22414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0313 23:48:35.394117   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	W0313 23:48:35.394195   22414 proxy.go:119] fail to check proxy env: Error ip not in block
	W0313 23:48:35.394222   22414 proxy.go:119] fail to check proxy env: Error ip not in block
	I0313 23:48:35.394290   22414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0313 23:48:35.394315   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:35.397114   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.397367   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.397523   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:35.397553   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.397705   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:35.397835   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:35.397862   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.398013   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:35.398051   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:35.398151   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:35.398197   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:35.398312   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:48:35.398346   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:35.398477   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:48:35.637929   22414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0313 23:48:35.644363   22414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0313 23:48:35.644422   22414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0313 23:48:35.661140   22414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0313 23:48:35.661163   22414 start.go:494] detecting cgroup driver to use...
	I0313 23:48:35.661232   22414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0313 23:48:35.679366   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0313 23:48:35.694561   22414 docker.go:217] disabling cri-docker service (if available) ...
	I0313 23:48:35.694624   22414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0313 23:48:35.709117   22414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0313 23:48:35.723163   22414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0313 23:48:35.842898   22414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0313 23:48:35.991544   22414 docker.go:233] disabling docker service ...
	I0313 23:48:35.991629   22414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0313 23:48:36.009122   22414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0313 23:48:36.024083   22414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0313 23:48:36.165785   22414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0313 23:48:36.316911   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0313 23:48:36.332008   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0313 23:48:36.353156   22414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0313 23:48:36.353221   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:48:36.364075   22414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0313 23:48:36.364132   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:48:36.374950   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:48:36.385632   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
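Taken together, the three sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with the following settings; this is a sketch of the expected end state inferred from the commands in the log, not a capture of the file itself:

    pause_image = "registry.k8s.io/pause:3.9"    # pinned by the first sed
    cgroup_manager = "cgroupfs"                  # forced by the second sed
    conmon_cgroup = "pod"                        # re-added right after cgroup_manager by the last two seds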
	I0313 23:48:36.396708   22414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0313 23:48:36.408619   22414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0313 23:48:36.420158   22414 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0313 23:48:36.420219   22414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0313 23:48:36.436036   22414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0313 23:48:36.447006   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:48:36.580531   22414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0313 23:48:36.725522   22414 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0313 23:48:36.725596   22414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0313 23:48:36.731189   22414 start.go:562] Will wait 60s for crictl version
	I0313 23:48:36.731246   22414 ssh_runner.go:195] Run: which crictl
	I0313 23:48:36.735480   22414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0313 23:48:36.778545   22414 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0313 23:48:36.778639   22414 ssh_runner.go:195] Run: crio --version
	I0313 23:48:36.811946   22414 ssh_runner.go:195] Run: crio --version
	I0313 23:48:36.848008   22414 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0313 23:48:36.849251   22414 out.go:177]   - env NO_PROXY=192.168.39.31
	I0313 23:48:36.850377   22414 out.go:177]   - env NO_PROXY=192.168.39.31,192.168.39.47
	I0313 23:48:36.851374   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:48:36.853713   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:36.854031   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:36.854053   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:36.854252   22414 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0313 23:48:36.858843   22414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:48:36.872293   22414 mustload.go:65] Loading cluster: ha-504633
	I0313 23:48:36.872560   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:48:36.872819   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:48:36.872857   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:48:36.888475   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0313 23:48:36.888949   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:48:36.889419   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:48:36.889439   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:48:36.889739   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:48:36.889931   22414 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:48:36.891566   22414 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:48:36.891854   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:48:36.891896   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:48:36.906024   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40251
	I0313 23:48:36.906476   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:48:36.906898   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:48:36.906919   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:48:36.907234   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:48:36.907397   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:48:36.907559   22414 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633 for IP: 192.168.39.156
	I0313 23:48:36.907571   22414 certs.go:194] generating shared ca certs ...
	I0313 23:48:36.907586   22414 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:48:36.907699   22414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0313 23:48:36.907733   22414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0313 23:48:36.907742   22414 certs.go:256] generating profile certs ...
	I0313 23:48:36.907805   22414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key
	I0313 23:48:36.907828   22414 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.0105be6a
	I0313 23:48:36.907853   22414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.0105be6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.31 192.168.39.47 192.168.39.156 192.168.39.254]
	I0313 23:48:37.191402   22414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.0105be6a ...
	I0313 23:48:37.191437   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.0105be6a: {Name:mk01aec37fad9eb342e8f4115b2ff616d738d56f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:48:37.191616   22414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.0105be6a ...
	I0313 23:48:37.191635   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.0105be6a: {Name:mkfba142dfa49e6dea2431f00b6486fa1ca09722 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:48:37.191731   22414 certs.go:381] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.0105be6a -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt
	I0313 23:48:37.191892   22414 certs.go:385] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.0105be6a -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key
	I0313 23:48:37.192087   22414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key
	I0313 23:48:37.192109   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0313 23:48:37.192127   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0313 23:48:37.192141   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0313 23:48:37.192158   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0313 23:48:37.192172   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0313 23:48:37.192185   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0313 23:48:37.192197   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0313 23:48:37.192206   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0313 23:48:37.192259   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0313 23:48:37.192288   22414 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0313 23:48:37.192299   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0313 23:48:37.192320   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0313 23:48:37.192343   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0313 23:48:37.192365   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0313 23:48:37.192400   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:48:37.192430   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /usr/share/ca-certificates/122682.pem
	I0313 23:48:37.192444   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:48:37.192456   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem -> /usr/share/ca-certificates/12268.pem
	I0313 23:48:37.192485   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:48:37.195532   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:48:37.195944   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:48:37.195973   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:48:37.196102   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:48:37.196252   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:48:37.196368   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:48:37.196468   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:48:37.275190   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0313 23:48:37.281593   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0313 23:48:37.304527   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0313 23:48:37.311458   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0313 23:48:37.322212   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0313 23:48:37.327637   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0313 23:48:37.338878   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0313 23:48:37.344214   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0313 23:48:37.356587   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0313 23:48:37.361373   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0313 23:48:37.375940   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0313 23:48:37.382277   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0313 23:48:37.395261   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0313 23:48:37.425031   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0313 23:48:37.452505   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0313 23:48:37.480332   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0313 23:48:37.522097   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0313 23:48:37.550670   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0313 23:48:37.576952   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0313 23:48:37.605324   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0313 23:48:37.633147   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0313 23:48:37.658539   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0313 23:48:37.683676   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0313 23:48:37.708558   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0313 23:48:37.725717   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0313 23:48:37.742461   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0313 23:48:37.759789   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0313 23:48:37.776921   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0313 23:48:37.794144   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0313 23:48:37.812232   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0313 23:48:37.829750   22414 ssh_runner.go:195] Run: openssl version
	I0313 23:48:37.835395   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0313 23:48:37.846020   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:48:37.850417   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:48:37.850461   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:48:37.856309   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0313 23:48:37.866963   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0313 23:48:37.877363   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0313 23:48:37.881844   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0313 23:48:37.881885   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0313 23:48:37.887483   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0313 23:48:37.897775   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0313 23:48:37.908109   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0313 23:48:37.912502   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0313 23:48:37.912537   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0313 23:48:37.918049   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
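The 8-hex-digit symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are the subject hashes printed by the preceding openssl x509 -hash calls; linking each CA under its hash in /etc/ssl/certs is what lets OpenSSL's hashed-directory lookup find it. For example, per the ln commands in the log:

    # prints b5213941, the name used for the minikubeCA symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem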
	I0313 23:48:37.929065   22414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0313 23:48:37.933117   22414 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0313 23:48:37.933160   22414 kubeadm.go:928] updating node {m03 192.168.39.156 8443 v1.28.4 crio true true} ...
	I0313 23:48:37.933230   22414 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-504633-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
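The kubelet drop-in rendered above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down (the 313-byte scp); --hostname-override and --node-ip pin the new member's identity to ha-504633-m03 / 192.168.39.156. As an illustrative check on the guest (not something the test runs), the merged unit could be inspected with:

    systemctl cat kubelet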
	I0313 23:48:37.933253   22414 kube-vip.go:105] generating kube-vip config ...
	I0313 23:48:37.933278   22414 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
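The manifest above is the static pod that is later copied to /etc/kubernetes/manifests/kube-vip.yaml (the 1346-byte scp below); once kubelet starts it, the control-plane VIP 192.168.39.254 should be bound on eth0 and the kube-vip instances contend for the plndr-cp-lock lease named in the env block. A quick manual check on the node, assuming the usual admin kubeconfig, would be:

    # is the VIP on the interface kube-vip manages?
    ip addr show eth0 | grep 192.168.39.254
    # which control-plane node currently holds the kube-vip leader lease?
    kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system get lease plndr-cp-lock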
	I0313 23:48:37.933311   22414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0313 23:48:37.942979   22414 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0313 23:48:37.943028   22414 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0313 23:48:37.952766   22414 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0313 23:48:37.952791   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0313 23:48:37.952809   22414 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0313 23:48:37.952852   22414 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0313 23:48:37.952860   22414 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0313 23:48:37.952871   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0313 23:48:37.952856   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:48:37.952930   22414 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0313 23:48:37.965815   22414 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0313 23:48:37.965843   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0313 23:48:37.965858   22414 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0313 23:48:37.965883   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0313 23:48:38.001795   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0313 23:48:38.001893   22414 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0313 23:48:38.119684   22414 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0313 23:48:38.119724   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0313 23:48:38.987934   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0313 23:48:38.998590   22414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0313 23:48:39.016730   22414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0313 23:48:39.034203   22414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0313 23:48:39.051852   22414 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0313 23:48:39.056306   22414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:48:39.070636   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:48:39.197836   22414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:48:39.217277   22414 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:48:39.217775   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:48:39.217830   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:48:39.232885   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46425
	I0313 23:48:39.233280   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:48:39.233770   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:48:39.233790   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:48:39.234162   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:48:39.234411   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:48:39.234582   22414 start.go:316] joinCluster: &{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:48:39.234739   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0313 23:48:39.234753   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:48:39.237855   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:48:39.238365   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:48:39.238390   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:48:39.238567   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:48:39.238747   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:48:39.238911   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:48:39.239058   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:48:39.401171   22414 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:48:39.401218   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8vtd06.300gcezfxmd801mh --discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-504633-m03 --control-plane --apiserver-advertise-address=192.168.39.156 --apiserver-bind-port=8443"
	I0313 23:49:05.595949   22414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8vtd06.300gcezfxmd801mh --discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-504633-m03 --control-plane --apiserver-advertise-address=192.168.39.156 --apiserver-bind-port=8443": (26.194704025s)
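At this point the third control-plane member has joined through the VIP endpoint control-plane.minikube.internal:8443 using the pre-generated bootstrap token, taking about 26 s. One way to double-check the result from the first node (names taken from this log; the commands are an illustration, not part of the test) would be:

    # the new member should appear with the control-plane role
    kubectl get nodes ha-504633-m03 -o wide
    # and a third etcd static pod should be running in kube-system
    kubectl -n kube-system get pods -l component=etcd -o wide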
	I0313 23:49:05.596019   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0313 23:49:06.122415   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-504633-m03 minikube.k8s.io/updated_at=2024_03_13T23_49_06_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe minikube.k8s.io/name=ha-504633 minikube.k8s.io/primary=false
	I0313 23:49:06.291089   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-504633-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0313 23:49:06.415982   22414 start.go:318] duration metric: took 27.181396251s to joinCluster
	I0313 23:49:06.416085   22414 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:49:06.417805   22414 out.go:177] * Verifying Kubernetes components...
	I0313 23:49:06.416449   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:49:06.419289   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:49:06.635370   22414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:49:06.655369   22414 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:49:06.655707   22414 kapi.go:59] client config for ha-504633: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.crt", KeyFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key", CAFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0313 23:49:06.655797   22414 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.31:8443
	I0313 23:49:06.656074   22414 node_ready.go:35] waiting up to 6m0s for node "ha-504633-m03" to be "Ready" ...
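The readiness wait that follows is node_ready.go polling GET /api/v1/nodes/ha-504633-m03 roughly every 500 ms (note the .157/.656 timestamps) and checking the node's Ready condition; the repeated round_trippers entries below are those polls. Functionally it is close to, though not implemented as, this one-liner:

    kubectl wait --for=condition=Ready node/ha-504633-m03 --timeout=6m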
	I0313 23:49:06.656156   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:06.656167   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:06.656177   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:06.656183   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:06.660365   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:07.157146   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:07.157177   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:07.157185   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:07.157194   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:07.161587   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:07.656405   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:07.656426   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:07.656434   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:07.656438   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:07.660558   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:08.156312   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:08.156332   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:08.156340   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:08.156343   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:08.160269   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:08.656612   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:08.656633   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:08.656644   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:08.656647   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:08.660190   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:08.660981   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:09.157309   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:09.157337   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:09.157347   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:09.157354   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:09.161762   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:09.656718   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:09.656744   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:09.656755   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:09.656760   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:09.662709   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:49:10.157200   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:10.157224   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:10.157232   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:10.157236   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:10.160892   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:10.656443   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:10.656465   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:10.656476   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:10.656492   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:10.660269   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:11.156342   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:11.156367   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:11.156379   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:11.156384   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:11.160336   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:11.161137   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:11.657012   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:11.657031   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:11.657039   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:11.657043   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:11.660636   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:12.156637   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:12.156659   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:12.156666   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:12.156670   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:12.160269   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:12.657191   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:12.657212   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:12.657222   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:12.657227   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:12.660950   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:13.156718   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:13.156752   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:13.156764   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:13.156769   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:13.161388   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:13.161962   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:13.657047   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:13.657068   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:13.657076   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:13.657080   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:13.660958   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:14.156305   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:14.156327   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:14.156337   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:14.156343   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:14.159935   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:14.656968   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:14.656989   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:14.656997   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:14.657002   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:14.660792   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:15.156728   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:15.156749   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:15.156756   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:15.156761   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:15.160574   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:15.657193   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:15.657235   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:15.657263   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:15.657269   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:15.661236   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:15.661987   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:16.156258   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:16.156281   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:16.156292   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:16.156296   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:16.160038   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:16.656366   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:16.656389   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:16.656400   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:16.656406   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:16.661256   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:17.156640   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:17.156672   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:17.156681   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:17.156685   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:17.160708   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:17.656538   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:17.656561   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:17.656573   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:17.656578   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:17.662541   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:49:17.663303   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:18.156579   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:18.156601   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:18.156609   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:18.156614   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:18.160511   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:18.656359   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:18.656382   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:18.656390   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:18.656394   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:18.660023   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:19.156747   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:19.156771   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:19.156780   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:19.156783   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:19.160504   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:19.656225   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:19.656251   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:19.656264   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:19.656270   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:19.660221   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:20.156798   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:20.156819   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:20.156831   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:20.156842   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:20.160836   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:20.161707   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:20.657073   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:20.657093   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:20.657102   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:20.657105   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:20.661497   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:21.156949   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:21.156982   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:21.156993   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:21.156999   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:21.160870   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:21.656450   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:21.656471   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:21.656479   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:21.656483   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:21.660293   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:22.157034   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:22.157062   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:22.157073   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:22.157079   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:22.161438   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:22.162094   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:22.656936   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:22.656956   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:22.656965   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:22.656969   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:22.668835   22414 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0313 23:49:23.156892   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:23.156914   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:23.156921   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:23.156924   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:23.161669   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:23.656492   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:23.656512   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:23.656520   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:23.656524   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:23.660224   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:24.156249   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:24.156269   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:24.156277   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:24.156282   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:24.160177   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:24.656890   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:24.656911   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:24.656922   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:24.656927   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:24.660744   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:24.661688   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:25.157161   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:25.157187   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:25.157198   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:25.157202   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:25.160839   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:25.657186   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:25.657206   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:25.657214   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:25.657217   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:25.660681   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:26.156626   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:26.156648   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:26.156657   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:26.156662   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:26.160565   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:26.656997   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:26.657023   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:26.657034   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:26.657043   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:26.660542   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:27.157103   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:27.157132   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:27.157143   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:27.157147   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:27.161748   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:27.162947   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:27.656442   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:27.656461   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:27.656469   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:27.656474   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:27.659981   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:28.156400   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:28.156422   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:28.156429   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:28.156433   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:28.159900   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:28.657085   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:28.657118   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:28.657128   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:28.657134   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:28.660279   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:29.156397   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:29.156432   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:29.156442   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:29.156447   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:29.160531   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:29.656277   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:29.656326   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:29.656336   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:29.656344   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:29.659912   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:29.660580   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:30.156616   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:30.156640   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:30.156650   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:30.156656   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:30.161718   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:49:30.656380   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:30.656416   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:30.656426   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:30.656434   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:30.659825   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:31.156362   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:31.156390   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:31.156399   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:31.156405   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:31.160760   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:31.657210   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:31.657235   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:31.657248   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:31.657255   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:31.661052   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:31.661725   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:32.156739   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:32.156765   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:32.156777   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:32.156783   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:32.160349   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:32.657026   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:32.657053   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:32.657066   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:32.657071   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:32.660625   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:33.156667   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:33.156689   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:33.156700   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:33.156705   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:33.160593   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:33.656835   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:33.656860   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:33.656874   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:33.656881   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:33.660429   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:34.156274   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:34.156295   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:34.156305   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:34.156310   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:34.159577   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:34.160118   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:34.656292   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:34.656311   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:34.656319   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:34.656323   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:34.660280   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:35.156408   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:35.156430   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:35.156440   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:35.156446   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:35.160043   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:35.656908   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:35.656935   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:35.656948   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:35.656952   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:35.660737   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:36.156633   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:36.156654   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:36.156662   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:36.156668   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:36.160175   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:36.160821   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:36.656664   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:36.656693   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:36.656705   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:36.656711   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:36.660393   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:37.156587   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:37.156614   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:37.156622   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:37.156626   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:37.160590   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:37.656458   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:37.656488   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:37.656500   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:37.656506   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:37.660153   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:38.157039   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:38.157062   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:38.157074   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:38.157079   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:38.161233   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:38.161913   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:38.656563   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:38.656583   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:38.656591   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:38.656595   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:38.660313   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:39.156359   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:39.156382   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:39.156390   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:39.156394   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:39.160204   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:39.657222   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:39.657255   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:39.657263   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:39.657267   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:39.660891   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:40.157105   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:40.157124   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:40.157132   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:40.157137   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:40.160693   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:40.656981   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:40.657004   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:40.657013   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:40.657018   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:40.660588   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:40.661073   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:41.156480   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:41.156509   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:41.156520   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:41.156525   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:41.160366   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:41.656987   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:41.657009   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:41.657017   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:41.657020   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:41.660801   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:42.156847   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:42.156874   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.156886   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.156890   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.160900   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:42.657184   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:42.657205   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.657213   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.657218   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.660663   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:42.661220   22414 node_ready.go:49] node "ha-504633-m03" has status "Ready":"True"
	I0313 23:49:42.661238   22414 node_ready.go:38] duration metric: took 36.005140846s for node "ha-504633-m03" to be "Ready" ...
	I0313 23:49:42.661248   22414 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0313 23:49:42.661315   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:49:42.661327   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.661335   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.661341   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.673305   22414 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0313 23:49:42.679704   22414 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dbkfv" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.679780   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dbkfv
	I0313 23:49:42.679787   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.679794   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.679805   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.683229   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:42.683953   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:42.683972   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.683983   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.683990   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.687009   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:42.687634   22414 pod_ready.go:92] pod "coredns-5dd5756b68-dbkfv" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:42.687656   22414 pod_ready.go:81] duration metric: took 7.928033ms for pod "coredns-5dd5756b68-dbkfv" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.687667   22414 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hh2kw" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.687722   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-hh2kw
	I0313 23:49:42.687735   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.687742   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.687747   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.690647   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:49:42.691458   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:42.691475   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.691481   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.691484   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.694308   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:49:42.695093   22414 pod_ready.go:92] pod "coredns-5dd5756b68-hh2kw" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:42.695110   22414 pod_ready.go:81] duration metric: took 7.429038ms for pod "coredns-5dd5756b68-hh2kw" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.695118   22414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.695158   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633
	I0313 23:49:42.695166   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.695173   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.695175   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.697936   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:49:42.698439   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:42.698451   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.698458   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.698461   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.701290   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:49:42.701693   22414 pod_ready.go:92] pod "etcd-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:42.701709   22414 pod_ready.go:81] duration metric: took 6.585814ms for pod "etcd-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.701717   22414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.701763   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:49:42.701771   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.701777   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.701781   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.705482   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:42.705966   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:42.705979   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.705986   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.705990   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.710405   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:42.711113   22414 pod_ready.go:92] pod "etcd-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:42.711127   22414 pod_ready.go:81] duration metric: took 9.40481ms for pod "etcd-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.711135   22414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.857547   22414 request.go:629] Waited for 146.335115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m03
	I0313 23:49:42.857614   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m03
	I0313 23:49:42.857623   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.857636   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.857644   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.861452   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:43.057319   22414 request.go:629] Waited for 195.291793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:43.057389   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:43.057394   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:43.057401   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:43.057404   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:43.062957   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:49:43.063923   22414 pod_ready.go:92] pod "etcd-ha-504633-m03" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:43.063948   22414 pod_ready.go:81] duration metric: took 352.806196ms for pod "etcd-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:43.063973   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:43.258198   22414 request.go:629] Waited for 194.156539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633
	I0313 23:49:43.258250   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633
	I0313 23:49:43.258255   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:43.258262   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:43.258267   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:43.261920   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:43.457909   22414 request.go:629] Waited for 195.376655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:43.457974   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:43.457979   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:43.457986   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:43.457990   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:43.462063   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:43.462868   22414 pod_ready.go:92] pod "kube-apiserver-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:43.462893   22414 pod_ready.go:81] duration metric: took 398.910882ms for pod "kube-apiserver-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:43.462905   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:43.658023   22414 request.go:629] Waited for 195.045771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633-m02
	I0313 23:49:43.658096   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633-m02
	I0313 23:49:43.658107   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:43.658117   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:43.658123   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:43.661935   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:43.857960   22414 request.go:629] Waited for 195.371095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:43.858055   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:43.858068   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:43.858081   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:43.858088   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:43.861950   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:43.862576   22414 pod_ready.go:92] pod "kube-apiserver-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:43.862599   22414 pod_ready.go:81] duration metric: took 399.683404ms for pod "kube-apiserver-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:43.862611   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:44.057745   22414 request.go:629] Waited for 195.057927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633-m03
	I0313 23:49:44.057822   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633-m03
	I0313 23:49:44.057832   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:44.057841   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:44.057847   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:44.061771   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:44.257786   22414 request.go:629] Waited for 195.400984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:44.257843   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:44.257847   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:44.257855   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:44.257860   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:44.261973   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:44.262451   22414 pod_ready.go:92] pod "kube-apiserver-ha-504633-m03" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:44.262470   22414 pod_ready.go:81] duration metric: took 399.850873ms for pod "kube-apiserver-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:44.262484   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:44.458161   22414 request.go:629] Waited for 195.594135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633
	I0313 23:49:44.458233   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633
	I0313 23:49:44.458244   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:44.458256   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:44.458262   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:44.462588   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:44.657528   22414 request.go:629] Waited for 194.387984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:44.657586   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:44.657592   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:44.657598   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:44.657603   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:44.661301   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:44.662096   22414 pod_ready.go:92] pod "kube-controller-manager-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:44.662117   22414 pod_ready.go:81] duration metric: took 399.621338ms for pod "kube-controller-manager-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:44.662130   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:44.858095   22414 request.go:629] Waited for 195.896254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633-m02
	I0313 23:49:44.858174   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633-m02
	I0313 23:49:44.858201   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:44.858213   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:44.858218   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:44.864178   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:49:45.057218   22414 request.go:629] Waited for 192.330184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:45.057295   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:45.057302   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:45.057312   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:45.057325   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:45.060714   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:45.061326   22414 pod_ready.go:92] pod "kube-controller-manager-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:45.061345   22414 pod_ready.go:81] duration metric: took 399.208021ms for pod "kube-controller-manager-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:45.061355   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:45.257479   22414 request.go:629] Waited for 196.049636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633-m03
	I0313 23:49:45.257530   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633-m03
	I0313 23:49:45.257535   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:45.257543   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:45.257546   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:45.261706   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:45.457710   22414 request.go:629] Waited for 195.37714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:45.457791   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:45.457797   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:45.457804   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:45.457809   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:45.461552   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:45.462172   22414 pod_ready.go:92] pod "kube-controller-manager-ha-504633-m03" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:45.462192   22414 pod_ready.go:81] duration metric: took 400.831073ms for pod "kube-controller-manager-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:45.462201   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4s9t5" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:45.657295   22414 request.go:629] Waited for 195.042177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4s9t5
	I0313 23:49:45.657352   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4s9t5
	I0313 23:49:45.657368   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:45.657375   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:45.657380   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:45.661842   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:45.857850   22414 request.go:629] Waited for 195.383513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:45.857913   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:45.857931   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:45.857943   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:45.857953   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:45.861846   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:45.862411   22414 pod_ready.go:92] pod "kube-proxy-4s9t5" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:45.862437   22414 pod_ready.go:81] duration metric: took 400.229023ms for pod "kube-proxy-4s9t5" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:45.862450   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fgcxp" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:46.058225   22414 request.go:629] Waited for 195.708482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgcxp
	I0313 23:49:46.058279   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgcxp
	I0313 23:49:46.058284   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:46.058291   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:46.058295   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:46.062068   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:46.258172   22414 request.go:629] Waited for 195.400958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:46.258238   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:46.258249   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:46.258259   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:46.258270   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:46.261914   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:46.262333   22414 pod_ready.go:92] pod "kube-proxy-fgcxp" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:46.262351   22414 pod_ready.go:81] duration metric: took 399.893993ms for pod "kube-proxy-fgcxp" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:46.262360   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j56zl" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:46.457527   22414 request.go:629] Waited for 195.09857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j56zl
	I0313 23:49:46.457596   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j56zl
	I0313 23:49:46.457602   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:46.457609   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:46.457615   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:46.461871   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:46.657920   22414 request.go:629] Waited for 195.260373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:46.658013   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:46.658021   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:46.658032   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:46.658039   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:46.662009   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:46.662639   22414 pod_ready.go:92] pod "kube-proxy-j56zl" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:46.662664   22414 pod_ready.go:81] duration metric: took 400.294109ms for pod "kube-proxy-j56zl" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:46.662676   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:46.857649   22414 request.go:629] Waited for 194.903331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633
	I0313 23:49:46.857721   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633
	I0313 23:49:46.857727   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:46.857737   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:46.857741   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:46.863018   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:49:47.058114   22414 request.go:629] Waited for 194.351431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:47.058173   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:47.058178   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:47.058186   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:47.058190   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:47.061891   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:47.062362   22414 pod_ready.go:92] pod "kube-scheduler-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:47.062379   22414 pod_ready.go:81] duration metric: took 399.695207ms for pod "kube-scheduler-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:47.062389   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:47.257581   22414 request.go:629] Waited for 195.108154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633-m02
	I0313 23:49:47.257632   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633-m02
	I0313 23:49:47.257636   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:47.257644   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:47.257649   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:47.261972   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:47.457697   22414 request.go:629] Waited for 195.169907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:47.457764   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:47.457772   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:47.457783   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:47.457788   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:47.462134   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:47.463074   22414 pod_ready.go:92] pod "kube-scheduler-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:47.463094   22414 pod_ready.go:81] duration metric: took 400.698904ms for pod "kube-scheduler-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:47.463106   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:47.658140   22414 request.go:629] Waited for 194.971007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633-m03
	I0313 23:49:47.658191   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633-m03
	I0313 23:49:47.658197   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:47.658204   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:47.658209   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:47.662107   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:47.857937   22414 request.go:629] Waited for 195.372026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:47.857993   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:47.858001   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:47.858010   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:47.858022   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:47.864046   22414 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0313 23:49:47.864566   22414 pod_ready.go:92] pod "kube-scheduler-ha-504633-m03" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:47.864586   22414 pod_ready.go:81] duration metric: took 401.473601ms for pod "kube-scheduler-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:47.864607   22414 pod_ready.go:38] duration metric: took 5.203345886s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0313 23:49:47.864632   22414 api_server.go:52] waiting for apiserver process to appear ...
	I0313 23:49:47.864693   22414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:49:47.884651   22414 api_server.go:72] duration metric: took 41.46852741s to wait for apiserver process to appear ...
	I0313 23:49:47.884684   22414 api_server.go:88] waiting for apiserver healthz status ...
	I0313 23:49:47.884705   22414 api_server.go:253] Checking apiserver healthz at https://192.168.39.31:8443/healthz ...
	I0313 23:49:47.891488   22414 api_server.go:279] https://192.168.39.31:8443/healthz returned 200:
	ok
	I0313 23:49:47.891571   22414 round_trippers.go:463] GET https://192.168.39.31:8443/version
	I0313 23:49:47.891583   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:47.891595   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:47.891608   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:47.892898   22414 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0313 23:49:47.893007   22414 api_server.go:141] control plane version: v1.28.4
	I0313 23:49:47.893032   22414 api_server.go:131] duration metric: took 8.340573ms to wait for apiserver health ...
	I0313 23:49:47.893040   22414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0313 23:49:48.057341   22414 request.go:629] Waited for 164.218413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:49:48.057408   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:49:48.057415   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:48.057431   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:48.057440   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:48.065455   22414 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0313 23:49:48.073154   22414 system_pods.go:59] 24 kube-system pods found
	I0313 23:49:48.073180   22414 system_pods.go:61] "coredns-5dd5756b68-dbkfv" [bb55bb86-7637-4571-af89-55b34361d46f] Running
	I0313 23:49:48.073184   22414 system_pods.go:61] "coredns-5dd5756b68-hh2kw" [ac81d022-8c47-4f99-8a34-bb4f73ead561] Running
	I0313 23:49:48.073193   22414 system_pods.go:61] "etcd-ha-504633" [eddb386f-0b62-4325-a14d-c2bb03e656bb] Running
	I0313 23:49:48.073196   22414 system_pods.go:61] "etcd-ha-504633-m02" [faf6ca7d-343c-4a0d-91ed-d2a401952f47] Running
	I0313 23:49:48.073199   22414 system_pods.go:61] "etcd-ha-504633-m03" [b1230ab0-c989-4b3e-96c7-f1ea1b866285] Running
	I0313 23:49:48.073202   22414 system_pods.go:61] "kindnet-5gfqz" [d8daf9d8-d130-4a0a-bfc8-a38d276444e1] Running
	I0313 23:49:48.073205   22414 system_pods.go:61] "kindnet-8kvnb" [b356234a-5293-417c-b78f-8d532dfe1bc1] Running
	I0313 23:49:48.073208   22414 system_pods.go:61] "kindnet-f4pz8" [9df1057a-d870-4e77-9261-0db3a8f2700f] Running
	I0313 23:49:48.073211   22414 system_pods.go:61] "kube-apiserver-ha-504633" [f4d0ca2b-c730-43a3-838f-05d28db9de36] Running
	I0313 23:49:48.073214   22414 system_pods.go:61] "kube-apiserver-ha-504633-m02" [3eff93a9-6e69-4837-9fe0-4357ae747b22] Running
	I0313 23:49:48.073217   22414 system_pods.go:61] "kube-apiserver-ha-504633-m03" [06b73358-0ea8-4b7e-b245-e3dea0a5a321] Running
	I0313 23:49:48.073220   22414 system_pods.go:61] "kube-controller-manager-ha-504633" [2a252519-49e0-4dce-81b3-73d6ea16b4d1] Running
	I0313 23:49:48.073223   22414 system_pods.go:61] "kube-controller-manager-ha-504633-m02" [b0ead72b-b8f6-449b-8f08-98bb5181b597] Running
	I0313 23:49:48.073226   22414 system_pods.go:61] "kube-controller-manager-ha-504633-m03" [93b8e260-d800-43d1-9b09-d72d7791b9db] Running
	I0313 23:49:48.073228   22414 system_pods.go:61] "kube-proxy-4s9t5" [e635f7e7-bc86-47d1-8368-db03fda06076] Running
	I0313 23:49:48.073231   22414 system_pods.go:61] "kube-proxy-fgcxp" [7ef9b719-adf6-4d07-9d11-9df0b5e923a6] Running
	I0313 23:49:48.073234   22414 system_pods.go:61] "kube-proxy-j56zl" [9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4] Running
	I0313 23:49:48.073237   22414 system_pods.go:61] "kube-scheduler-ha-504633" [41980454-97cd-4fa1-ac32-edc1e1c5bc02] Running
	I0313 23:49:48.073242   22414 system_pods.go:61] "kube-scheduler-ha-504633-m02" [24d5f3a2-02f9-4f0b-b1ea-94b8c065b6e4] Running
	I0313 23:49:48.073247   22414 system_pods.go:61] "kube-scheduler-ha-504633-m03" [de4d66e3-bec6-4dbd-ade8-d252b040ad68] Running
	I0313 23:49:48.073253   22414 system_pods.go:61] "kube-vip-ha-504633" [eed1a72e-a42d-4961-b3bf-f3a2bb7fa3bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:49:48.073267   22414 system_pods.go:61] "kube-vip-ha-504633-m02" [88577446-c879-45b6-a7f6-6b76e9e26fcc] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:49:48.073275   22414 system_pods.go:61] "kube-vip-ha-504633-m03" [3a6ecc18-b04d-43b3-bdc0-82b1f75b6a4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:49:48.073279   22414 system_pods.go:61] "storage-provisioner" [0e57f625-8927-418c-bdf2-9022439f858c] Running
	I0313 23:49:48.073288   22414 system_pods.go:74] duration metric: took 180.240776ms to wait for pod list to return data ...
	I0313 23:49:48.073297   22414 default_sa.go:34] waiting for default service account to be created ...
	I0313 23:49:48.257752   22414 request.go:629] Waited for 184.393715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/default/serviceaccounts
	I0313 23:49:48.257806   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/default/serviceaccounts
	I0313 23:49:48.257811   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:48.257818   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:48.257822   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:48.262100   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:48.262231   22414 default_sa.go:45] found service account: "default"
	I0313 23:49:48.262252   22414 default_sa.go:55] duration metric: took 188.948599ms for default service account to be created ...
	I0313 23:49:48.262262   22414 system_pods.go:116] waiting for k8s-apps to be running ...
	I0313 23:49:48.457611   22414 request.go:629] Waited for 195.270655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:49:48.457681   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:49:48.457689   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:48.457700   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:48.457704   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:48.467177   22414 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0313 23:49:48.472914   22414 system_pods.go:86] 24 kube-system pods found
	I0313 23:49:48.472944   22414 system_pods.go:89] "coredns-5dd5756b68-dbkfv" [bb55bb86-7637-4571-af89-55b34361d46f] Running
	I0313 23:49:48.472949   22414 system_pods.go:89] "coredns-5dd5756b68-hh2kw" [ac81d022-8c47-4f99-8a34-bb4f73ead561] Running
	I0313 23:49:48.472954   22414 system_pods.go:89] "etcd-ha-504633" [eddb386f-0b62-4325-a14d-c2bb03e656bb] Running
	I0313 23:49:48.472958   22414 system_pods.go:89] "etcd-ha-504633-m02" [faf6ca7d-343c-4a0d-91ed-d2a401952f47] Running
	I0313 23:49:48.472962   22414 system_pods.go:89] "etcd-ha-504633-m03" [b1230ab0-c989-4b3e-96c7-f1ea1b866285] Running
	I0313 23:49:48.472967   22414 system_pods.go:89] "kindnet-5gfqz" [d8daf9d8-d130-4a0a-bfc8-a38d276444e1] Running
	I0313 23:49:48.472970   22414 system_pods.go:89] "kindnet-8kvnb" [b356234a-5293-417c-b78f-8d532dfe1bc1] Running
	I0313 23:49:48.472974   22414 system_pods.go:89] "kindnet-f4pz8" [9df1057a-d870-4e77-9261-0db3a8f2700f] Running
	I0313 23:49:48.472979   22414 system_pods.go:89] "kube-apiserver-ha-504633" [f4d0ca2b-c730-43a3-838f-05d28db9de36] Running
	I0313 23:49:48.472986   22414 system_pods.go:89] "kube-apiserver-ha-504633-m02" [3eff93a9-6e69-4837-9fe0-4357ae747b22] Running
	I0313 23:49:48.472992   22414 system_pods.go:89] "kube-apiserver-ha-504633-m03" [06b73358-0ea8-4b7e-b245-e3dea0a5a321] Running
	I0313 23:49:48.473003   22414 system_pods.go:89] "kube-controller-manager-ha-504633" [2a252519-49e0-4dce-81b3-73d6ea16b4d1] Running
	I0313 23:49:48.473007   22414 system_pods.go:89] "kube-controller-manager-ha-504633-m02" [b0ead72b-b8f6-449b-8f08-98bb5181b597] Running
	I0313 23:49:48.473011   22414 system_pods.go:89] "kube-controller-manager-ha-504633-m03" [93b8e260-d800-43d1-9b09-d72d7791b9db] Running
	I0313 23:49:48.473015   22414 system_pods.go:89] "kube-proxy-4s9t5" [e635f7e7-bc86-47d1-8368-db03fda06076] Running
	I0313 23:49:48.473019   22414 system_pods.go:89] "kube-proxy-fgcxp" [7ef9b719-adf6-4d07-9d11-9df0b5e923a6] Running
	I0313 23:49:48.473023   22414 system_pods.go:89] "kube-proxy-j56zl" [9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4] Running
	I0313 23:49:48.473027   22414 system_pods.go:89] "kube-scheduler-ha-504633" [41980454-97cd-4fa1-ac32-edc1e1c5bc02] Running
	I0313 23:49:48.473033   22414 system_pods.go:89] "kube-scheduler-ha-504633-m02" [24d5f3a2-02f9-4f0b-b1ea-94b8c065b6e4] Running
	I0313 23:49:48.473037   22414 system_pods.go:89] "kube-scheduler-ha-504633-m03" [de4d66e3-bec6-4dbd-ade8-d252b040ad68] Running
	I0313 23:49:48.473046   22414 system_pods.go:89] "kube-vip-ha-504633" [eed1a72e-a42d-4961-b3bf-f3a2bb7fa3bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:49:48.473054   22414 system_pods.go:89] "kube-vip-ha-504633-m02" [88577446-c879-45b6-a7f6-6b76e9e26fcc] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:49:48.473061   22414 system_pods.go:89] "kube-vip-ha-504633-m03" [3a6ecc18-b04d-43b3-bdc0-82b1f75b6a4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:49:48.473067   22414 system_pods.go:89] "storage-provisioner" [0e57f625-8927-418c-bdf2-9022439f858c] Running
	I0313 23:49:48.473074   22414 system_pods.go:126] duration metric: took 210.806744ms to wait for k8s-apps to be running ...
	I0313 23:49:48.473083   22414 system_svc.go:44] waiting for kubelet service to be running ....
	I0313 23:49:48.473125   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:49:48.489892   22414 system_svc.go:56] duration metric: took 16.801333ms WaitForService to wait for kubelet
	I0313 23:49:48.489925   22414 kubeadm.go:576] duration metric: took 42.073801943s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0313 23:49:48.489948   22414 node_conditions.go:102] verifying NodePressure condition ...
	I0313 23:49:48.657767   22414 request.go:629] Waited for 167.730049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes
	I0313 23:49:48.657818   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes
	I0313 23:49:48.657823   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:48.657831   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:48.657837   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:48.663597   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:49:48.664893   22414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0313 23:49:48.664912   22414 node_conditions.go:123] node cpu capacity is 2
	I0313 23:49:48.664922   22414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0313 23:49:48.664925   22414 node_conditions.go:123] node cpu capacity is 2
	I0313 23:49:48.664930   22414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0313 23:49:48.664934   22414 node_conditions.go:123] node cpu capacity is 2
	I0313 23:49:48.664937   22414 node_conditions.go:105] duration metric: took 174.984846ms to run NodePressure ...
	I0313 23:49:48.664948   22414 start.go:240] waiting for startup goroutines ...
	I0313 23:49:48.664969   22414 start.go:254] writing updated cluster config ...
	I0313 23:49:48.665215   22414 ssh_runner.go:195] Run: rm -f paused
	I0313 23:49:48.718671   22414 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0313 23:49:48.720821   22414 out.go:177] * Done! kubectl is now configured to use "ha-504633" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 13 23:53:26 ha-504633 crio[677]: time="2024-03-13 23:53:26.998161204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374006997926991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=996d767b-3452-44ee-898f-cbe77e0e3d6e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:53:26 ha-504633 crio[677]: time="2024-03-13 23:53:26.998747293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3beb111d-18b0-4bc6-869e-415935b00727 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:53:26 ha-504633 crio[677]: time="2024-03-13 23:53:26.998821191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3beb111d-18b0-4bc6-869e-415935b00727 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:53:26 ha-504633 crio[677]: time="2024-03-13 23:53:26.999234754Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710373871791346467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710373793336158016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aadb470eed29b2f719f5fbb858bf9123995c5f9752f94e4c060b37334f36098a,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710373668554301132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0cf88a442bd5269582c822abc0a7930b457d582051bd1ba8ee6c91da797c0,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710373535030450727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534967959757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534992420839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87585aab2e4e09551759e68536fb211fa1b0caf3e52eaa8a9cc6c2a02018f9c,PodSandboxId:735a5bdd8eef7f11df3fa830558ddbe61791bece0de398767fc4db3bbdee441e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710373533407104753,Labels
:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710373529504359284,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710373509707648764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710373509711823598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f760286dfea8a62f478148ae4d5d43792cffbe25bf3faa4e1dc3f19d288fa6c1,PodSandboxId:fd22a4b33ad9b49285775f14547dd4ed73c1ca85c0e858aed820073cb0006444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710373509625494690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504
633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581070edea465f8d145d27a60ffb393b98695bf0829f1e57ab098e13914064c9,PodSandboxId:6ae040a8c89c0b7014c84995dc85f3d22dcbe425ba9c24179907f7e70b7fb7f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710373509619646191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3beb111d-18b0-4bc6-869e-415935b00727 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.027555044Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=28b46308-d771-4fd2-b005-24cfae1173e7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.027917903Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-dx92g,Uid:e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710373790958231932,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-13T23:49:49.740105671Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0e57f625-8927-418c-bdf2-9022439f858c,Namespace:kube-system,Attempt:0,},State:SAN
DBOX_READY,CreatedAt:1710373534680611303,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\
"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-13T23:45:34.362520373Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-dbkfv,Uid:bb55bb86-7637-4571-af89-55b34361d46f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710373534672529565,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-13T23:45:34.357252352Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-hh2kw,Uid:ac81d022-8c47-4f99-8a34-bb4f73ead561,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1710373534652530074,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-13T23:45:34.345384580Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&PodSandboxMetadata{Name:kube-proxy-j56zl,Uid:9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710373529406724107,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[stri
ng]string{kubernetes.io/config.seen: 2024-03-13T23:45:28.468943425Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:735a5bdd8eef7f11df3fa830558ddbe61791bece0de398767fc4db3bbdee441e,Metadata:&PodSandboxMetadata{Name:kindnet-8kvnb,Uid:b356234a-5293-417c-b78f-8d532dfe1bc1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710373529391374054,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-13T23:45:28.470542470Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6ae040a8c89c0b7014c84995dc85f3d22dcbe425ba9c24179907f7e70b7fb7f5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-504633,Uid:00cdbdbd1a1d0aefa499a886ae738c0a,Namespace:kube-system,Attempt
:0,},State:SANDBOX_READY,CreatedAt:1710373509388654981,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.31:8443,kubernetes.io/config.hash: 00cdbdbd1a1d0aefa499a886ae738c0a,kubernetes.io/config.seen: 2024-03-13T23:45:08.911346492Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-504633,Uid:b9f7ed25c0cb42b2cf61135e6a1c245f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710373509387120801,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0
cb42b2cf61135e6a1c245f,},Annotations:map[string]string{kubernetes.io/config.hash: b9f7ed25c0cb42b2cf61135e6a1c245f,kubernetes.io/config.seen: 2024-03-13T23:45:08.911349302Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&PodSandboxMetadata{Name:etcd-ha-504633,Uid:800b1d8694f42b67376c6e23b8dd8603,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710373509385548633,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.31:2379,kubernetes.io/config.hash: 800b1d8694f42b67376c6e23b8dd8603,kubernetes.io/config.seen: 2024-03-13T23:45:08.911342636Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e5651d5d4cdf17422415
8e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-504633,Uid:c67e920ab8fd05e2d7c9a70920aeb5b4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710373509378898250,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c67e920ab8fd05e2d7c9a70920aeb5b4,kubernetes.io/config.seen: 2024-03-13T23:45:08.911348659Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fd22a4b33ad9b49285775f14547dd4ed73c1ca85c0e858aed820073cb0006444,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-504633,Uid:e8a4476828b7f0f0c95498e085ba5df9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710373509366802042,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e8a4476828b7f0f0c95498e085ba5df9,kubernetes.io/config.seen: 2024-03-13T23:45:08.911347696Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=28b46308-d771-4fd2-b005-24cfae1173e7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.029040939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75a32347-9839-4e47-ae4b-6e262da6b34c name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.029120048Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75a32347-9839-4e47-ae4b-6e262da6b34c name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.029449783Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710373871791346467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710373793336158016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aadb470eed29b2f719f5fbb858bf9123995c5f9752f94e4c060b37334f36098a,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710373668554301132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0cf88a442bd5269582c822abc0a7930b457d582051bd1ba8ee6c91da797c0,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710373535030450727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534967959757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534992420839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87585aab2e4e09551759e68536fb211fa1b0caf3e52eaa8a9cc6c2a02018f9c,PodSandboxId:735a5bdd8eef7f11df3fa830558ddbe61791bece0de398767fc4db3bbdee441e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710373533407104753,Labels
:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710373529504359284,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710373509707648764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710373509711823598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f760286dfea8a62f478148ae4d5d43792cffbe25bf3faa4e1dc3f19d288fa6c1,PodSandboxId:fd22a4b33ad9b49285775f14547dd4ed73c1ca85c0e858aed820073cb0006444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710373509625494690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504
633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581070edea465f8d145d27a60ffb393b98695bf0829f1e57ab098e13914064c9,PodSandboxId:6ae040a8c89c0b7014c84995dc85f3d22dcbe425ba9c24179907f7e70b7fb7f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710373509619646191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75a32347-9839-4e47-ae4b-6e262da6b34c name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.049037576Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=876e6f63-4a21-4244-b9b1-5f2ddbcb642c name=/runtime.v1.RuntimeService/Version
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.049129153Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=876e6f63-4a21-4244-b9b1-5f2ddbcb642c name=/runtime.v1.RuntimeService/Version
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.050891580Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48bf2bc3-abc6-4347-8678-c7022c1b33aa name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.052094334Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374007051960232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48bf2bc3-abc6-4347-8678-c7022c1b33aa name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.052890033Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc221adf-8417-4f77-bc7b-2efe0525c830 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.053019220Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc221adf-8417-4f77-bc7b-2efe0525c830 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.053389174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710373871791346467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710373793336158016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aadb470eed29b2f719f5fbb858bf9123995c5f9752f94e4c060b37334f36098a,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710373668554301132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0cf88a442bd5269582c822abc0a7930b457d582051bd1ba8ee6c91da797c0,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710373535030450727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534967959757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534992420839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87585aab2e4e09551759e68536fb211fa1b0caf3e52eaa8a9cc6c2a02018f9c,PodSandboxId:735a5bdd8eef7f11df3fa830558ddbe61791bece0de398767fc4db3bbdee441e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710373533407104753,Labels
:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710373529504359284,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710373509707648764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710373509711823598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f760286dfea8a62f478148ae4d5d43792cffbe25bf3faa4e1dc3f19d288fa6c1,PodSandboxId:fd22a4b33ad9b49285775f14547dd4ed73c1ca85c0e858aed820073cb0006444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710373509625494690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504
633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581070edea465f8d145d27a60ffb393b98695bf0829f1e57ab098e13914064c9,PodSandboxId:6ae040a8c89c0b7014c84995dc85f3d22dcbe425ba9c24179907f7e70b7fb7f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710373509619646191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc221adf-8417-4f77-bc7b-2efe0525c830 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.077722500Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=d0fb8524-4852-4036-ac3c-7838b6f1bd4d name=/runtime.v1.RuntimeService/Status
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.077815559Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d0fb8524-4852-4036-ac3c-7838b6f1bd4d name=/runtime.v1.RuntimeService/Status
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.093652391Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3270e16a-d0d4-4d30-9518-29ce51cd9591 name=/runtime.v1.RuntimeService/Version
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.093747656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3270e16a-d0d4-4d30-9518-29ce51cd9591 name=/runtime.v1.RuntimeService/Version
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.095185655Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c1e4736-5e9a-475e-808b-c9d0160df497 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.095710200Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374007095677393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c1e4736-5e9a-475e-808b-c9d0160df497 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.096443225Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8210c4a2-9f3e-4eb0-a01c-630b76c9ce95 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.096614214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8210c4a2-9f3e-4eb0-a01c-630b76c9ce95 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:53:27 ha-504633 crio[677]: time="2024-03-13 23:53:27.096918056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710373871791346467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710373793336158016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aadb470eed29b2f719f5fbb858bf9123995c5f9752f94e4c060b37334f36098a,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710373668554301132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0cf88a442bd5269582c822abc0a7930b457d582051bd1ba8ee6c91da797c0,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710373535030450727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534967959757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534992420839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87585aab2e4e09551759e68536fb211fa1b0caf3e52eaa8a9cc6c2a02018f9c,PodSandboxId:735a5bdd8eef7f11df3fa830558ddbe61791bece0de398767fc4db3bbdee441e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710373533407104753,Labels
:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710373529504359284,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710373509707648764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710373509711823598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f760286dfea8a62f478148ae4d5d43792cffbe25bf3faa4e1dc3f19d288fa6c1,PodSandboxId:fd22a4b33ad9b49285775f14547dd4ed73c1ca85c0e858aed820073cb0006444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710373509625494690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504
633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581070edea465f8d145d27a60ffb393b98695bf0829f1e57ab098e13914064c9,PodSandboxId:6ae040a8c89c0b7014c84995dc85f3d22dcbe425ba9c24179907f7e70b7fb7f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710373509619646191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8210c4a2-9f3e-4eb0-a01c-630b76c9ce95 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b8cd8ab250ed1       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago       Exited              kube-vip                  7                   6664331d2d846       kube-vip-ha-504633
	3e670be31d057       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   44694d6d0ddb1       busybox-5b5d89c9d6-dx92g
	aadb470eed29b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       1                   f7efac86eab07       storage-provisioner
	d6d0cf88a442b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       0                   f7efac86eab07       storage-provisioner
	91c5fdb6071ed       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   ac06f7523df34       coredns-5dd5756b68-dbkfv
	cea68e46e7574       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   99eec3703a3ac       coredns-5dd5756b68-hh2kw
	b87585aab2e4e       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago       Running             kindnet-cni               0                   735a5bdd8eef7       kindnet-8kvnb
	ce0dc1e514cfe       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago       Running             kube-proxy                0                   508491d3a970a       kube-proxy-j56zl
	ec04eb9f36ad1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      8 minutes ago       Running             etcd                      0                   2e892e8826932       etcd-ha-504633
	03595624eed74       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      8 minutes ago       Running             kube-scheduler            0                   e5651d5d4cdf1       kube-scheduler-ha-504633
	f760286dfea8a       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      8 minutes ago       Running             kube-controller-manager   0                   fd22a4b33ad9b       kube-controller-manager-ha-504633
	581070edea465       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      8 minutes ago       Running             kube-apiserver            0                   6ae040a8c89c0       kube-apiserver-ha-504633
	
	
	==> coredns [91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025] <==
	[INFO] 10.244.0.4:34622 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180157s
	[INFO] 10.244.0.4:40563 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010268s
	[INFO] 10.244.0.4:45464 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106179s
	[INFO] 10.244.2.2:37253 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138314s
	[INFO] 10.244.2.2:37661 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001945874s
	[INFO] 10.244.2.2:45263 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00157594s
	[INFO] 10.244.2.2:56184 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095082s
	[INFO] 10.244.2.2:38062 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145314s
	[INFO] 10.244.2.2:47535 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099682s
	[INFO] 10.244.1.2:38146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248518s
	[INFO] 10.244.1.2:54521 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00160289s
	[INFO] 10.244.1.2:34985 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001396473s
	[INFO] 10.244.1.2:37504 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127175s
	[INFO] 10.244.1.2:47786 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089644s
	[INFO] 10.244.0.4:42865 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167315s
	[INFO] 10.244.2.2:37374 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167385s
	[INFO] 10.244.2.2:33251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009522s
	[INFO] 10.244.1.2:39140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158704s
	[INFO] 10.244.1.2:36398 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143215s
	[INFO] 10.244.1.2:60528 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012073s
	[INFO] 10.244.1.2:45057 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013653s
	[INFO] 10.244.0.4:55605 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153423s
	[INFO] 10.244.1.2:37595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218212s
	[INFO] 10.244.1.2:45054 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155156s
	[INFO] 10.244.1.2:45734 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159775s
	
	
	==> coredns [cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d] <==
	[INFO] 127.0.0.1:60482 - 8231 "HINFO IN 4188345321067739738.1742461500624588533. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008155088s
	[INFO] 10.244.2.2:60205 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002156886s
	[INFO] 10.244.1.2:53349 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000212493s
	[INFO] 10.244.1.2:36980 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001402731s
	[INFO] 10.244.0.4:41863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106374s
	[INFO] 10.244.0.4:36734 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153111s
	[INFO] 10.244.0.4:36918 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002576888s
	[INFO] 10.244.2.2:52506 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000216481s
	[INFO] 10.244.2.2:41181 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142291s
	[INFO] 10.244.1.2:41560 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185807s
	[INFO] 10.244.1.2:34843 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104567s
	[INFO] 10.244.1.2:36490 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000226318s
	[INFO] 10.244.0.4:60091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107953s
	[INFO] 10.244.0.4:37327 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151724s
	[INFO] 10.244.0.4:35399 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043972s
	[INFO] 10.244.2.2:59809 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090745s
	[INFO] 10.244.2.2:40239 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069623s
	[INFO] 10.244.0.4:36867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127937s
	[INFO] 10.244.0.4:35854 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000195121s
	[INFO] 10.244.0.4:56742 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109765s
	[INFO] 10.244.2.2:33696 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132875s
	[INFO] 10.244.2.2:51474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149174s
	[INFO] 10.244.2.2:58642 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010185s
	[INFO] 10.244.2.2:58203 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089769s
	[INFO] 10.244.1.2:54587 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000118471s
	
	
	==> describe nodes <==
	Name:               ha-504633
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_13T23_45_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:45:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Mar 2024 23:53:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Mar 2024 23:50:21 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Mar 2024 23:50:21 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Mar 2024 23:50:21 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Mar 2024 23:50:21 +0000   Wed, 13 Mar 2024 23:45:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.31
	  Hostname:    ha-504633
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 13fd8f4b90794ddf8d3d6bdb9051c529
	  System UUID:                13fd8f4b-9079-4ddf-8d3d-6bdb9051c529
	  Boot ID:                    83daf814-565c-4717-8930-43f7c53558eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-dx92g             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-5dd5756b68-dbkfv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m59s
	  kube-system                 coredns-5dd5756b68-hh2kw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m59s
	  kube-system                 etcd-ha-504633                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m10s
	  kube-system                 kindnet-8kvnb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m59s
	  kube-system                 kube-apiserver-ha-504633             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-controller-manager-ha-504633    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-proxy-j56zl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-scheduler-ha-504633             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-vip-ha-504633                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m57s                  kube-proxy       
	  Normal  Starting                 8m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m18s (x7 over 8m19s)  kubelet          Node ha-504633 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m18s (x8 over 8m19s)  kubelet          Node ha-504633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m18s (x8 over 8m19s)  kubelet          Node ha-504633 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m11s                  kubelet          Node ha-504633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m11s                  kubelet          Node ha-504633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m11s                  kubelet          Node ha-504633 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m                     node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal  NodeReady                7m53s                  kubelet          Node ha-504633 status is now: NodeReady
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	
	
	Name:               ha-504633-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_13T23_47_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:47:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Mar 2024 23:51:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 13 Mar 2024 23:50:09 +0000   Wed, 13 Mar 2024 23:51:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 13 Mar 2024 23:50:09 +0000   Wed, 13 Mar 2024 23:51:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 13 Mar 2024 23:50:09 +0000   Wed, 13 Mar 2024 23:51:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 13 Mar 2024 23:50:09 +0000   Wed, 13 Mar 2024 23:51:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    ha-504633-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f6ba1a02ba14580ac16771f2b426854
	  System UUID:                5f6ba1a0-2ba1-4580-ac16-771f2b426854
	  Boot ID:                    d6e314b0-19ea-491a-ae7d-e96708f9fad9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-zfjjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-504633-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m51s
	  kube-system                 kindnet-f4pz8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m51s
	  kube-system                 kube-apiserver-ha-504633-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-controller-manager-ha-504633-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-proxy-4s9t5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-scheduler-ha-504633-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-vip-ha-504633-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m35s  kube-proxy       
	  Normal  RegisteredNode  5m50s  node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode  5m22s  node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode  4m7s   node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  NodeNotReady    104s   node-controller  Node ha-504633-m02 status is now: NodeNotReady
	
	
	Name:               ha-504633-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_13T23_49_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:49:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Mar 2024 23:53:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Mar 2024 23:50:10 +0000   Wed, 13 Mar 2024 23:49:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Mar 2024 23:50:10 +0000   Wed, 13 Mar 2024 23:49:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Mar 2024 23:50:10 +0000   Wed, 13 Mar 2024 23:49:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Mar 2024 23:50:10 +0000   Wed, 13 Mar 2024 23:49:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    ha-504633-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 8771be2dfdfd44f18d592fcb20bb5a4c
	  System UUID:                8771be2d-fdfd-44f1-8d59-2fcb20bb5a4c
	  Boot ID:                    72dccc4c-7d49-4586-a425-779d86f055c9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-prmkb                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-504633-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m24s
	  kube-system                 kindnet-5gfqz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m24s
	  kube-system                 kube-apiserver-ha-504633-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-controller-manager-ha-504633-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-fgcxp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-scheduler-ha-504633-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-vip-ha-504633-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m11s  kube-proxy       
	  Normal  RegisteredNode  4m22s  node-controller  Node ha-504633-m03 event: Registered Node ha-504633-m03 in Controller
	  Normal  RegisteredNode  4m20s  node-controller  Node ha-504633-m03 event: Registered Node ha-504633-m03 in Controller
	  Normal  RegisteredNode  4m7s   node-controller  Node ha-504633-m03 event: Registered Node ha-504633-m03 in Controller
	
	
	Name:               ha-504633-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_13T23_50_35_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:50:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Mar 2024 23:53:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Mar 2024 23:51:05 +0000   Wed, 13 Mar 2024 23:50:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Mar 2024 23:51:05 +0000   Wed, 13 Mar 2024 23:50:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Mar 2024 23:51:05 +0000   Wed, 13 Mar 2024 23:50:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Mar 2024 23:51:05 +0000   Wed, 13 Mar 2024 23:50:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-504633-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d985b67edcea4528bf49bb9fe5eeb65e
	  System UUID:                d985b67e-dcea-4528-bf49-bb9fe5eeb65e
	  Boot ID:                    e84e96f1-dcb9-4264-902b-3879a0b7824e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dn6gl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m53s
	  kube-system                 kube-proxy-7hr7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m53s (x5 over 2m54s)  kubelet          Node ha-504633-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x5 over 2m54s)  kubelet          Node ha-504633-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x5 over 2m54s)  kubelet          Node ha-504633-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m52s                  node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal  RegisteredNode           2m50s                  node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-504633-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar13 23:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054621] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040971] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.527621] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.806595] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.718708] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.715783] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.056763] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061483] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.171003] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.142829] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.235386] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Mar13 23:45] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.057845] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.706645] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.862236] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.155181] kauditd_printk_skb: 51 callbacks suppressed
	[  +2.379152] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[ +12.986535] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.322868] kauditd_printk_skb: 43 callbacks suppressed
	[Mar13 23:46] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714] <==
	{"level":"warn","ts":"2024-03-13T23:53:27.292491Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.323086Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.368821Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.377781Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.383888Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.396621Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.408604Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.417933Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.423375Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.42374Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.426938Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.435165Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.44386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.453281Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.457159Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.460953Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.469843Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.482279Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.507606Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.515458Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.519158Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.523079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.525634Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.535648Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:53:27.544571Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:53:27 up 8 min,  0 users,  load average: 0.27, 0.42, 0.25
	Linux ha-504633 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b87585aab2e4e09551759e68536fb211fa1b0caf3e52eaa8a9cc6c2a02018f9c] <==
	I0313 23:52:51.330082       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0313 23:53:01.341914       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0313 23:53:01.341963       1 main.go:227] handling current node
	I0313 23:53:01.342322       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0313 23:53:01.342333       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0313 23:53:01.342497       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0313 23:53:01.342525       1 main.go:250] Node ha-504633-m03 has CIDR [10.244.2.0/24] 
	I0313 23:53:01.342581       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0313 23:53:01.342606       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0313 23:53:11.354473       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0313 23:53:11.354575       1 main.go:227] handling current node
	I0313 23:53:11.354599       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0313 23:53:11.354617       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0313 23:53:11.354782       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0313 23:53:11.354805       1 main.go:250] Node ha-504633-m03 has CIDR [10.244.2.0/24] 
	I0313 23:53:11.354863       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0313 23:53:11.354880       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0313 23:53:21.362299       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0313 23:53:21.362351       1 main.go:227] handling current node
	I0313 23:53:21.362362       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0313 23:53:21.362369       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0313 23:53:21.362493       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0313 23:53:21.362521       1 main.go:250] Node ha-504633-m03 has CIDR [10.244.2.0/24] 
	I0313 23:53:21.362592       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0313 23:53:21.362618       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [581070edea465f8d145d27a60ffb393b98695bf0829f1e57ab098e13914064c9] <==
	Trace[1382010347]: ["GuaranteedUpdate etcd3" audit-id:9d800dc8-9d5c-4334-b46d-d9312af116de,key:/minions/ha-504633-m02,type:*core.Node,resource:nodes 2897ms (23:47:47.973)
	Trace[1382010347]:  ---"Txn call completed" 2894ms (23:47:50.869)]
	Trace[1382010347]: ---"About to apply patch" 2894ms (23:47:50.869)
	Trace[1382010347]: [2.897617525s] [2.897617525s] END
	I0313 23:47:50.874188       1 trace.go:236] Trace[1891003387]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:77fe511b-1e93-448f-8542-759fa0cc00eb,client:192.168.39.254,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-504633,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (13-Mar-2024 23:47:47.105) (total time: 3768ms):
	Trace[1891003387]: ["GuaranteedUpdate etcd3" audit-id:77fe511b-1e93-448f-8542-759fa0cc00eb,key:/leases/kube-node-lease/ha-504633,type:*coordination.Lease,resource:leases.coordination.k8s.io 3768ms (23:47:47.105)
	Trace[1891003387]:  ---"Txn call completed" 3767ms (23:47:50.873)]
	Trace[1891003387]: [3.768087495s] [3.768087495s] END
	I0313 23:47:50.875545       1 trace.go:236] Trace[1251262935]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:a2b7579e-f11c-4fdd-8bb1-1135281f6eb5,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-6jfanw7f7nh6bubgtbpmxrwaa4,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (13-Mar-2024 23:47:45.930) (total time: 4945ms):
	Trace[1251262935]: ["GuaranteedUpdate etcd3" audit-id:a2b7579e-f11c-4fdd-8bb1-1135281f6eb5,key:/leases/kube-system/apiserver-6jfanw7f7nh6bubgtbpmxrwaa4,type:*coordination.Lease,resource:leases.coordination.k8s.io 4945ms (23:47:45.930)
	Trace[1251262935]:  ---"Txn call completed" 4944ms (23:47:50.875)]
	Trace[1251262935]: [4.945246684s] [4.945246684s] END
	I0313 23:47:50.902375       1 trace.go:236] Trace[1177651587]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a69b7365-dad8-420a-90e3-79bbf70dbe0a,client:192.168.39.47,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-504633-m02/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (13-Mar-2024 23:47:46.404) (total time: 4497ms):
	Trace[1177651587]: ["GuaranteedUpdate etcd3" audit-id:a69b7365-dad8-420a-90e3-79bbf70dbe0a,key:/minions/ha-504633-m02,type:*core.Node,resource:nodes 4497ms (23:47:46.404)
	Trace[1177651587]:  ---"Txn call completed" 4460ms (23:47:50.867)
	Trace[1177651587]:  ---"Txn call completed" 32ms (23:47:50.901)]
	Trace[1177651587]: ---"About to apply patch" 4461ms (23:47:50.867)
	Trace[1177651587]: ---"Object stored in database" 32ms (23:47:50.901)
	Trace[1177651587]: [4.49778148s] [4.49778148s] END
	I0313 23:47:50.911845       1 trace.go:236] Trace[667313303]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:3b29c8fd-5302-42f6-95ab-621f32af71b0,client:192.168.39.47,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (13-Mar-2024 23:47:45.866) (total time: 5045ms):
	Trace[667313303]: [5.045079797s] [5.045079797s] END
	I0313 23:47:50.937552       1 trace.go:236] Trace[247215225]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:45535216-b5d7-433c-ae45-eab8672b8af7,client:192.168.39.47,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (13-Mar-2024 23:47:43.862) (total time: 7075ms):
	Trace[247215225]: ---"Write to database call failed" len:2991,err:pods "kube-apiserver-ha-504633-m02" already exists 18ms (23:47:50.937)
	Trace[247215225]: [7.075134913s] [7.075134913s] END
	W0313 23:51:24.992862       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.156 192.168.39.31]
	
	
	==> kube-controller-manager [f760286dfea8a62f478148ae4d5d43792cffbe25bf3faa4e1dc3f19d288fa6c1] <==
	I0313 23:49:55.430231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="110.843µs"
	I0313 23:49:59.938849       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="16.252706ms"
	I0313 23:49:59.939150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="64.928µs"
	I0313 23:50:34.641412       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-504633-m04\" does not exist"
	I0313 23:50:34.656553       1 range_allocator.go:380] "Set node PodCIDR" node="ha-504633-m04" podCIDRs=["10.244.3.0/24"]
	I0313 23:50:34.685765       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7hr7b"
	I0313 23:50:34.699861       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hnxz6"
	I0313 23:50:34.811917       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-4pm44"
	I0313 23:50:34.887484       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-npx4n"
	I0313 23:50:34.943758       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-bmf5z"
	I0313 23:50:34.955698       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-hnxz6"
	I0313 23:50:37.986035       1 event.go:307] "Event occurred" object="ha-504633-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller"
	I0313 23:50:38.000824       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-504633-m04"
	I0313 23:50:45.325321       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-504633-m04"
	I0313 23:51:43.029396       1 event.go:307] "Event occurred" object="ha-504633-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-504633-m02 status is now: NodeNotReady"
	I0313 23:51:43.032014       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-504633-m04"
	I0313 23:51:43.042655       1 event.go:307] "Event occurred" object="kube-system/etcd-ha-504633-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0313 23:51:43.069204       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-ha-504633-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0313 23:51:43.086438       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-ha-504633-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0313 23:51:43.104161       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-ha-504633-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0313 23:51:43.120549       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-zfjjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0313 23:51:43.142400       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-4s9t5" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0313 23:51:43.174126       1 event.go:307] "Event occurred" object="kube-system/kindnet-f4pz8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0313 23:51:43.185285       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="61.704053ms"
	I0313 23:51:43.185414       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="65.937µs"
	
	
	==> kube-proxy [ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a] <==
	I0313 23:45:29.711578       1 server_others.go:69] "Using iptables proxy"
	I0313 23:45:29.730452       1 node.go:141] Successfully retrieved node IP: 192.168.39.31
	I0313 23:45:29.778135       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0313 23:45:29.778173       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0313 23:45:29.781710       1 server_others.go:152] "Using iptables Proxier"
	I0313 23:45:29.782511       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0313 23:45:29.782796       1 server.go:846] "Version info" version="v1.28.4"
	I0313 23:45:29.782835       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:45:29.784428       1 config.go:188] "Starting service config controller"
	I0313 23:45:29.785222       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0313 23:45:29.785343       1 config.go:315] "Starting node config controller"
	I0313 23:45:29.785372       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0313 23:45:29.785796       1 config.go:97] "Starting endpoint slice config controller"
	I0313 23:45:29.785829       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0313 23:45:29.885734       1 shared_informer.go:318] Caches are synced for node config
	I0313 23:45:29.885761       1 shared_informer.go:318] Caches are synced for service config
	I0313 23:45:29.886938       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33] <==
	I0313 23:45:16.354108       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0313 23:49:03.728254       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fgcxp\": pod kube-proxy-fgcxp is already assigned to node \"ha-504633-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fgcxp" node="ha-504633-m03"
	E0313 23:49:03.728412       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 7ef9b719-adf6-4d07-9d11-9df0b5e923a6(kube-system/kube-proxy-fgcxp) wasn't assumed so cannot be forgotten"
	E0313 23:49:03.728517       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fgcxp\": pod kube-proxy-fgcxp is already assigned to node \"ha-504633-m03\"" pod="kube-system/kube-proxy-fgcxp"
	I0313 23:49:03.728602       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fgcxp" node="ha-504633-m03"
	E0313 23:49:03.728755       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5gfqz\": pod kindnet-5gfqz is already assigned to node \"ha-504633-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-5gfqz" node="ha-504633-m03"
	E0313 23:49:03.728931       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod d8daf9d8-d130-4a0a-bfc8-a38d276444e1(kube-system/kindnet-5gfqz) wasn't assumed so cannot be forgotten"
	E0313 23:49:03.729044       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5gfqz\": pod kindnet-5gfqz is already assigned to node \"ha-504633-m03\"" pod="kube-system/kindnet-5gfqz"
	I0313 23:49:03.729143       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5gfqz" node="ha-504633-m03"
	E0313 23:49:49.750063       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-zfjjt\": pod busybox-5b5d89c9d6-zfjjt is already assigned to node \"ha-504633-m02\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-zfjjt" node="ha-504633-m02"
	E0313 23:49:49.750226       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-zfjjt\": pod busybox-5b5d89c9d6-zfjjt is already assigned to node \"ha-504633-m02\"" pod="default/busybox-5b5d89c9d6-zfjjt"
	E0313 23:49:49.752680       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-dx92g\": pod busybox-5b5d89c9d6-dx92g is already assigned to node \"ha-504633\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-dx92g" node="ha-504633"
	E0313 23:49:49.752878       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod e4da8d7b-2fcc-46b3-a6a3-12f23d16de43(default/busybox-5b5d89c9d6-dx92g) wasn't assumed so cannot be forgotten"
	E0313 23:49:49.754271       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-dx92g\": pod busybox-5b5d89c9d6-dx92g is already assigned to node \"ha-504633\"" pod="default/busybox-5b5d89c9d6-dx92g"
	I0313 23:49:49.755372       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-dx92g" node="ha-504633"
	E0313 23:50:34.722286       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7hr7b\": pod kube-proxy-7hr7b is already assigned to node \"ha-504633-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7hr7b" node="ha-504633-m04"
	E0313 23:50:34.723189       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 6283f13a-f061-4d3b-a492-30bffd8d4201(kube-system/kube-proxy-7hr7b) wasn't assumed so cannot be forgotten"
	E0313 23:50:34.723330       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7hr7b\": pod kube-proxy-7hr7b is already assigned to node \"ha-504633-m04\"" pod="kube-system/kube-proxy-7hr7b"
	I0313 23:50:34.723401       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7hr7b" node="ha-504633-m04"
	E0313 23:50:34.798955       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-npx4n\": pod kube-proxy-npx4n is already assigned to node \"ha-504633-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-npx4n" node="ha-504633-m04"
	E0313 23:50:34.799432       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-npx4n\": pod kube-proxy-npx4n is already assigned to node \"ha-504633-m04\"" pod="kube-system/kube-proxy-npx4n"
	E0313 23:50:34.800517       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4pm44\": pod kindnet-4pm44 is already assigned to node \"ha-504633-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4pm44" node="ha-504633-m04"
	E0313 23:50:34.800734       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod ae5a1328-27b9-4887-9376-743463d7efda(kube-system/kindnet-4pm44) wasn't assumed so cannot be forgotten"
	E0313 23:50:34.800795       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4pm44\": pod kindnet-4pm44 is already assigned to node \"ha-504633-m04\"" pod="kube-system/kindnet-4pm44"
	I0313 23:50:34.800828       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4pm44" node="ha-504633-m04"
	
	
	==> kubelet <==
	Mar 13 23:51:58 ha-504633 kubelet[1439]: E0313 23:51:58.777898    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:52:12 ha-504633 kubelet[1439]: I0313 23:52:12.777693    1439 scope.go:117] "RemoveContainer" containerID="b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	Mar 13 23:52:12 ha-504633 kubelet[1439]: E0313 23:52:12.783857    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:52:16 ha-504633 kubelet[1439]: E0313 23:52:16.828252    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 13 23:52:16 ha-504633 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 13 23:52:16 ha-504633 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 13 23:52:16 ha-504633 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 13 23:52:16 ha-504633 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 13 23:52:24 ha-504633 kubelet[1439]: I0313 23:52:24.779722    1439 scope.go:117] "RemoveContainer" containerID="b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	Mar 13 23:52:24 ha-504633 kubelet[1439]: E0313 23:52:24.781420    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:52:35 ha-504633 kubelet[1439]: I0313 23:52:35.777230    1439 scope.go:117] "RemoveContainer" containerID="b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	Mar 13 23:52:35 ha-504633 kubelet[1439]: E0313 23:52:35.777898    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:52:50 ha-504633 kubelet[1439]: I0313 23:52:50.776805    1439 scope.go:117] "RemoveContainer" containerID="b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	Mar 13 23:52:50 ha-504633 kubelet[1439]: E0313 23:52:50.779817    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:53:04 ha-504633 kubelet[1439]: I0313 23:53:04.776807    1439 scope.go:117] "RemoveContainer" containerID="b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	Mar 13 23:53:04 ha-504633 kubelet[1439]: E0313 23:53:04.777429    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:53:15 ha-504633 kubelet[1439]: I0313 23:53:15.776809    1439 scope.go:117] "RemoveContainer" containerID="b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	Mar 13 23:53:15 ha-504633 kubelet[1439]: E0313 23:53:15.778012    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:53:16 ha-504633 kubelet[1439]: E0313 23:53:16.828498    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 13 23:53:16 ha-504633 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 13 23:53:16 ha-504633 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 13 23:53:16 ha-504633 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 13 23:53:16 ha-504633 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 13 23:53:27 ha-504633 kubelet[1439]: I0313 23:53:27.776642    1439 scope.go:117] "RemoveContainer" containerID="b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	Mar 13 23:53:27 ha-504633 kubelet[1439]: E0313 23:53:27.777418    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-504633 -n ha-504633
helpers_test.go:261: (dbg) Run:  kubectl --context ha-504633 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/StopSecondaryNode (142.15s)

                                                
                                    
x
+
TestMutliControlPlane/serial/RestartSecondaryNode (55.43s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr: exit status 3 (3.199127067s)

                                                
                                                
-- stdout --
	ha-504633
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-504633-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0313 23:53:32.244857   27077 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:53:32.245133   27077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:53:32.245147   27077 out.go:304] Setting ErrFile to fd 2...
	I0313 23:53:32.245154   27077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:53:32.245358   27077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:53:32.245540   27077 out.go:298] Setting JSON to false
	I0313 23:53:32.245568   27077 mustload.go:65] Loading cluster: ha-504633
	I0313 23:53:32.245672   27077 notify.go:220] Checking for updates...
	I0313 23:53:32.245988   27077 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:53:32.246003   27077 status.go:255] checking status of ha-504633 ...
	I0313 23:53:32.246460   27077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:32.246506   27077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:32.265365   27077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38079
	I0313 23:53:32.265889   27077 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:32.266583   27077 main.go:141] libmachine: Using API Version  1
	I0313 23:53:32.266606   27077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:32.267027   27077 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:32.267248   27077 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:53:32.268915   27077 status.go:330] ha-504633 host status = "Running" (err=<nil>)
	I0313 23:53:32.268929   27077 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:53:32.269223   27077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:32.269261   27077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:32.284777   27077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44449
	I0313 23:53:32.285216   27077 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:32.285698   27077 main.go:141] libmachine: Using API Version  1
	I0313 23:53:32.285725   27077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:32.286020   27077 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:32.286259   27077 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:53:32.289025   27077 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:32.289525   27077 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:53:32.289558   27077 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:32.289706   27077 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:53:32.290005   27077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:32.290057   27077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:32.304666   27077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33183
	I0313 23:53:32.305113   27077 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:32.305634   27077 main.go:141] libmachine: Using API Version  1
	I0313 23:53:32.305668   27077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:32.306016   27077 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:32.306193   27077 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:53:32.306488   27077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:32.306509   27077 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:53:32.309146   27077 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:32.309527   27077 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:53:32.309548   27077 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:32.309733   27077 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:53:32.309933   27077 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:53:32.310096   27077 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:53:32.310250   27077 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:53:32.395341   27077 ssh_runner.go:195] Run: systemctl --version
	I0313 23:53:32.401566   27077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:32.417549   27077 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:53:32.417579   27077 api_server.go:166] Checking apiserver status ...
	I0313 23:53:32.417622   27077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:53:32.434853   27077 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0313 23:53:32.445528   27077 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:53:32.445569   27077 ssh_runner.go:195] Run: ls
	I0313 23:53:32.450590   27077 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:53:32.458845   27077 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:53:32.458865   27077 status.go:422] ha-504633 apiserver status = Running (err=<nil>)
	I0313 23:53:32.458874   27077 status.go:257] ha-504633 status: &{Name:ha-504633 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:53:32.458890   27077 status.go:255] checking status of ha-504633-m02 ...
	I0313 23:53:32.459230   27077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:32.459262   27077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:32.474489   27077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I0313 23:53:32.474922   27077 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:32.475350   27077 main.go:141] libmachine: Using API Version  1
	I0313 23:53:32.475373   27077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:32.475762   27077 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:32.475968   27077 main.go:141] libmachine: (ha-504633-m02) Calling .GetState
	I0313 23:53:32.477508   27077 status.go:330] ha-504633-m02 host status = "Running" (err=<nil>)
	I0313 23:53:32.477520   27077 host.go:66] Checking if "ha-504633-m02" exists ...
	I0313 23:53:32.477789   27077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:32.477821   27077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:32.491965   27077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0313 23:53:32.492326   27077 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:32.492759   27077 main.go:141] libmachine: Using API Version  1
	I0313 23:53:32.492784   27077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:32.493135   27077 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:32.493348   27077 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:53:32.495836   27077 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:32.496255   27077 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:53:32.496274   27077 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:32.496423   27077 host.go:66] Checking if "ha-504633-m02" exists ...
	I0313 23:53:32.496705   27077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:32.496736   27077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:32.511000   27077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44517
	I0313 23:53:32.511412   27077 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:32.511890   27077 main.go:141] libmachine: Using API Version  1
	I0313 23:53:32.511914   27077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:32.512203   27077 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:32.512374   27077 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:53:32.512539   27077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:32.512557   27077 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:53:32.515380   27077 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:32.515750   27077 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:53:32.515773   27077 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:32.515896   27077 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:53:32.516048   27077 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:53:32.516203   27077 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:53:32.516322   27077 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	W0313 23:53:35.031147   27077 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.47:22: connect: no route to host
	W0313 23:53:35.031242   27077 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	E0313 23:53:35.031273   27077 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:53:35.031286   27077 status.go:257] ha-504633-m02 status: &{Name:ha-504633-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0313 23:53:35.031320   27077 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:53:35.031331   27077 status.go:255] checking status of ha-504633-m03 ...
	I0313 23:53:35.031770   27077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:35.031829   27077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:35.049348   27077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45181
	I0313 23:53:35.049892   27077 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:35.050533   27077 main.go:141] libmachine: Using API Version  1
	I0313 23:53:35.050565   27077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:35.050912   27077 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:35.051105   27077 main.go:141] libmachine: (ha-504633-m03) Calling .GetState
	I0313 23:53:35.052787   27077 status.go:330] ha-504633-m03 host status = "Running" (err=<nil>)
	I0313 23:53:35.052803   27077 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:53:35.053105   27077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:35.053149   27077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:35.067735   27077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
	I0313 23:53:35.068067   27077 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:35.068540   27077 main.go:141] libmachine: Using API Version  1
	I0313 23:53:35.068563   27077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:35.068850   27077 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:35.069046   27077 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:53:35.071897   27077 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:35.072324   27077 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:53:35.072352   27077 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:35.072505   27077 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:53:35.072909   27077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:35.072956   27077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:35.087262   27077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43941
	I0313 23:53:35.087611   27077 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:35.088059   27077 main.go:141] libmachine: Using API Version  1
	I0313 23:53:35.088083   27077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:35.088419   27077 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:35.088613   27077 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:53:35.088776   27077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:35.088796   27077 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:53:35.091475   27077 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:35.091849   27077 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:53:35.091872   27077 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:35.092043   27077 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:53:35.092213   27077 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:53:35.092385   27077 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:53:35.092544   27077 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:53:35.171982   27077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:35.189403   27077 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:53:35.189429   27077 api_server.go:166] Checking apiserver status ...
	I0313 23:53:35.189466   27077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:53:35.205148   27077 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup
	W0313 23:53:35.215782   27077 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:53:35.215836   27077 ssh_runner.go:195] Run: ls
	I0313 23:53:35.220362   27077 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:53:35.224911   27077 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:53:35.224933   27077 status.go:422] ha-504633-m03 apiserver status = Running (err=<nil>)
	I0313 23:53:35.224943   27077 status.go:257] ha-504633-m03 status: &{Name:ha-504633-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:53:35.224959   27077 status.go:255] checking status of ha-504633-m04 ...
	I0313 23:53:35.225218   27077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:35.225247   27077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:35.239696   27077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
	I0313 23:53:35.240083   27077 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:35.240564   27077 main.go:141] libmachine: Using API Version  1
	I0313 23:53:35.240583   27077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:35.240924   27077 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:35.241125   27077 main.go:141] libmachine: (ha-504633-m04) Calling .GetState
	I0313 23:53:35.242618   27077 status.go:330] ha-504633-m04 host status = "Running" (err=<nil>)
	I0313 23:53:35.242632   27077 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:53:35.242886   27077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:35.242915   27077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:35.256858   27077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43525
	I0313 23:53:35.257290   27077 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:35.257707   27077 main.go:141] libmachine: Using API Version  1
	I0313 23:53:35.257726   27077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:35.257986   27077 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:35.258180   27077 main.go:141] libmachine: (ha-504633-m04) Calling .GetIP
	I0313 23:53:35.260653   27077 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:35.261035   27077 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:53:35.261057   27077 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:35.261164   27077 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:53:35.261496   27077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:35.261531   27077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:35.275747   27077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38449
	I0313 23:53:35.276143   27077 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:35.276578   27077 main.go:141] libmachine: Using API Version  1
	I0313 23:53:35.276597   27077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:35.276904   27077 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:35.277129   27077 main.go:141] libmachine: (ha-504633-m04) Calling .DriverName
	I0313 23:53:35.277300   27077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:35.277324   27077 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHHostname
	I0313 23:53:35.280236   27077 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:35.280728   27077 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:53:35.280752   27077 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:35.280959   27077 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHPort
	I0313 23:53:35.281103   27077 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHKeyPath
	I0313 23:53:35.281237   27077 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHUsername
	I0313 23:53:35.281388   27077 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m04/id_rsa Username:docker}
	I0313 23:53:35.370400   27077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:35.386523   27077 status.go:257] ha-504633-m04 status: &{Name:ha-504633-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
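Note: the failure captured above is an SSH reachability problem rather than a Kubernetes one. Every attempt to reach ha-504633-m02 at 192.168.39.47:22 ends in "connect: no route to host", so the status check reports the host as Error and kubelet/apiserver as Nonexistent. As a rough illustration only (this is not minikube's actual code), a minimal Go probe of the same TCP dial the ssh client performs would look like the sketch below; the address comes from the m02 DHCP lease logged above, and the 5-second timeout is an arbitrary choice.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH dials a node's SSH endpoint the way the status path's ssh client
// would; on an unreachable guest this returns "connect: no route to host",
// matching the error logged for ha-504633-m02 above.
func probeSSH(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// IP taken from the ha-504633-m02 DHCP lease in the log; port 22 is the
	// SSH port used by the KVM machine driver.
	if err := probeSSH("192.168.39.47:22"); err != nil {
		fmt.Println("m02 unreachable:", err) // expected here: connect: no route to host
		return
	}
	fmt.Println("m02 reachable")
}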
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
E0313 23:53:36.335272   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr: exit status 3 (2.547276455s)

                                                
                                                
-- stdout --
	ha-504633
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-504633-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0313 23:53:35.964868   27162 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:53:35.964977   27162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:53:35.964988   27162 out.go:304] Setting ErrFile to fd 2...
	I0313 23:53:35.964993   27162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:53:35.965258   27162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:53:35.965419   27162 out.go:298] Setting JSON to false
	I0313 23:53:35.965443   27162 mustload.go:65] Loading cluster: ha-504633
	I0313 23:53:35.965554   27162 notify.go:220] Checking for updates...
	I0313 23:53:35.965849   27162 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:53:35.965867   27162 status.go:255] checking status of ha-504633 ...
	I0313 23:53:35.966245   27162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:35.966304   27162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:35.988367   27162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39377
	I0313 23:53:35.988724   27162 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:35.989283   27162 main.go:141] libmachine: Using API Version  1
	I0313 23:53:35.989310   27162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:35.989676   27162 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:35.989845   27162 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:53:35.991723   27162 status.go:330] ha-504633 host status = "Running" (err=<nil>)
	I0313 23:53:35.991743   27162 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:53:35.992058   27162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:35.992096   27162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:36.007089   27162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I0313 23:53:36.007444   27162 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:36.007941   27162 main.go:141] libmachine: Using API Version  1
	I0313 23:53:36.007967   27162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:36.008232   27162 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:36.008394   27162 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:53:36.011154   27162 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:36.011583   27162 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:53:36.011623   27162 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:36.011746   27162 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:53:36.012095   27162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:36.012160   27162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:36.026280   27162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I0313 23:53:36.026617   27162 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:36.027083   27162 main.go:141] libmachine: Using API Version  1
	I0313 23:53:36.027112   27162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:36.027413   27162 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:36.027620   27162 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:53:36.027823   27162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:36.027854   27162 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:53:36.030319   27162 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:36.030797   27162 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:53:36.030833   27162 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:36.030924   27162 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:53:36.031086   27162 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:53:36.031244   27162 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:53:36.031366   27162 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:53:36.114990   27162 ssh_runner.go:195] Run: systemctl --version
	I0313 23:53:36.121095   27162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:36.135814   27162 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:53:36.135843   27162 api_server.go:166] Checking apiserver status ...
	I0313 23:53:36.135892   27162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:53:36.150405   27162 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0313 23:53:36.160911   27162 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:53:36.160977   27162 ssh_runner.go:195] Run: ls
	I0313 23:53:36.165672   27162 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:53:36.170141   27162 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:53:36.170163   27162 status.go:422] ha-504633 apiserver status = Running (err=<nil>)
	I0313 23:53:36.170174   27162 status.go:257] ha-504633 status: &{Name:ha-504633 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:53:36.170193   27162 status.go:255] checking status of ha-504633-m02 ...
	I0313 23:53:36.170519   27162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:36.170565   27162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:36.185191   27162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38869
	I0313 23:53:36.185540   27162 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:36.186177   27162 main.go:141] libmachine: Using API Version  1
	I0313 23:53:36.186205   27162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:36.186513   27162 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:36.186778   27162 main.go:141] libmachine: (ha-504633-m02) Calling .GetState
	I0313 23:53:36.188398   27162 status.go:330] ha-504633-m02 host status = "Running" (err=<nil>)
	I0313 23:53:36.188413   27162 host.go:66] Checking if "ha-504633-m02" exists ...
	I0313 23:53:36.188711   27162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:36.188752   27162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:36.203499   27162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
	I0313 23:53:36.203890   27162 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:36.204440   27162 main.go:141] libmachine: Using API Version  1
	I0313 23:53:36.204462   27162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:36.204815   27162 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:36.205019   27162 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:53:36.207659   27162 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:36.208002   27162 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:53:36.208032   27162 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:36.208192   27162 host.go:66] Checking if "ha-504633-m02" exists ...
	I0313 23:53:36.208536   27162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:36.208576   27162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:36.223382   27162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46257
	I0313 23:53:36.223760   27162 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:36.224192   27162 main.go:141] libmachine: Using API Version  1
	I0313 23:53:36.224207   27162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:36.224581   27162 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:36.224771   27162 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:53:36.224976   27162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:36.224996   27162 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:53:36.227869   27162 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:36.228296   27162 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:53:36.228317   27162 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:36.228490   27162 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:53:36.228653   27162 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:53:36.228781   27162 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:53:36.228916   27162 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	W0313 23:53:38.103092   27162 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.47:22: connect: no route to host
	W0313 23:53:38.103193   27162 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	E0313 23:53:38.103242   27162 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:53:38.103256   27162 status.go:257] ha-504633-m02 status: &{Name:ha-504633-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0313 23:53:38.103278   27162 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:53:38.103288   27162 status.go:255] checking status of ha-504633-m03 ...
	I0313 23:53:38.103592   27162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:38.103633   27162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:38.118431   27162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36001
	I0313 23:53:38.118933   27162 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:38.119411   27162 main.go:141] libmachine: Using API Version  1
	I0313 23:53:38.119434   27162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:38.119737   27162 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:38.119894   27162 main.go:141] libmachine: (ha-504633-m03) Calling .GetState
	I0313 23:53:38.121331   27162 status.go:330] ha-504633-m03 host status = "Running" (err=<nil>)
	I0313 23:53:38.121348   27162 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:53:38.121642   27162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:38.121692   27162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:38.137149   27162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0313 23:53:38.137534   27162 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:38.138097   27162 main.go:141] libmachine: Using API Version  1
	I0313 23:53:38.138116   27162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:38.138520   27162 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:38.138705   27162 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:53:38.141813   27162 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:38.142215   27162 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:53:38.142251   27162 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:38.142477   27162 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:53:38.142884   27162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:38.142924   27162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:38.158531   27162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46061
	I0313 23:53:38.158968   27162 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:38.159444   27162 main.go:141] libmachine: Using API Version  1
	I0313 23:53:38.159471   27162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:38.159790   27162 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:38.159958   27162 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:53:38.160153   27162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:38.160173   27162 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:53:38.162676   27162 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:38.163100   27162 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:53:38.163121   27162 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:38.163217   27162 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:53:38.163378   27162 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:53:38.163520   27162 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:53:38.163651   27162 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:53:38.242925   27162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:38.259636   27162 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:53:38.259660   27162 api_server.go:166] Checking apiserver status ...
	I0313 23:53:38.259691   27162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:53:38.273803   27162 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup
	W0313 23:53:38.283729   27162 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:53:38.283773   27162 ssh_runner.go:195] Run: ls
	I0313 23:53:38.288994   27162 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:53:38.294693   27162 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:53:38.294713   27162 status.go:422] ha-504633-m03 apiserver status = Running (err=<nil>)
	I0313 23:53:38.294722   27162 status.go:257] ha-504633-m03 status: &{Name:ha-504633-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:53:38.294738   27162 status.go:255] checking status of ha-504633-m04 ...
	I0313 23:53:38.295138   27162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:38.295183   27162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:38.310271   27162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
	I0313 23:53:38.310660   27162 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:38.311203   27162 main.go:141] libmachine: Using API Version  1
	I0313 23:53:38.311226   27162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:38.311511   27162 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:38.311761   27162 main.go:141] libmachine: (ha-504633-m04) Calling .GetState
	I0313 23:53:38.313409   27162 status.go:330] ha-504633-m04 host status = "Running" (err=<nil>)
	I0313 23:53:38.313424   27162 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:53:38.313738   27162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:38.313782   27162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:38.328164   27162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36699
	I0313 23:53:38.328487   27162 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:38.328908   27162 main.go:141] libmachine: Using API Version  1
	I0313 23:53:38.328926   27162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:38.329250   27162 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:38.329440   27162 main.go:141] libmachine: (ha-504633-m04) Calling .GetIP
	I0313 23:53:38.332132   27162 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:38.332506   27162 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:53:38.332541   27162 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:38.332657   27162 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:53:38.333001   27162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:38.333059   27162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:38.348322   27162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39207
	I0313 23:53:38.348709   27162 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:38.349184   27162 main.go:141] libmachine: Using API Version  1
	I0313 23:53:38.349205   27162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:38.349548   27162 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:38.349750   27162 main.go:141] libmachine: (ha-504633-m04) Calling .DriverName
	I0313 23:53:38.349937   27162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:38.349961   27162 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHHostname
	I0313 23:53:38.353423   27162 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:38.353912   27162 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:53:38.353937   27162 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:38.354083   27162 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHPort
	I0313 23:53:38.354266   27162 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHKeyPath
	I0313 23:53:38.354438   27162 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHUsername
	I0313 23:53:38.354677   27162 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m04/id_rsa Username:docker}
	I0313 23:53:38.438156   27162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:38.452475   27162 status.go:257] ha-504633-m04 status: &{Name:ha-504633-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
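For the nodes that are reachable, the status path falls back to the apiserver /healthz probe after the freezer-cgroup lookup fails, and the 200 "ok" response from https://192.168.39.254:8443 is what produces the "apiserver: Running" lines above. A hedged Go sketch of that probe follows (illustrative only; the real check authenticates with the cluster's client certificates instead of skipping TLS verification).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// VIP and port come from the kubeconfig server entry found in the log
	// ("https://192.168.39.254:8443"). TLS verification is skipped only
	// because this standalone sketch does not load the cluster CA.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expected: 200: ok
}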
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr: exit status 3 (4.761318792s)

                                                
                                                
-- stdout --
	ha-504633
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-504633-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0313 23:53:39.903003   27257 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:53:39.903113   27257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:53:39.903121   27257 out.go:304] Setting ErrFile to fd 2...
	I0313 23:53:39.903126   27257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:53:39.903317   27257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:53:39.903488   27257 out.go:298] Setting JSON to false
	I0313 23:53:39.903512   27257 mustload.go:65] Loading cluster: ha-504633
	I0313 23:53:39.903556   27257 notify.go:220] Checking for updates...
	I0313 23:53:39.903837   27257 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:53:39.903849   27257 status.go:255] checking status of ha-504633 ...
	I0313 23:53:39.904234   27257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:39.904295   27257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:39.924400   27257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43789
	I0313 23:53:39.924849   27257 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:39.925432   27257 main.go:141] libmachine: Using API Version  1
	I0313 23:53:39.925457   27257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:39.925807   27257 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:39.926069   27257 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:53:39.927867   27257 status.go:330] ha-504633 host status = "Running" (err=<nil>)
	I0313 23:53:39.927887   27257 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:53:39.928218   27257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:39.928252   27257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:39.942736   27257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35183
	I0313 23:53:39.943146   27257 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:39.943530   27257 main.go:141] libmachine: Using API Version  1
	I0313 23:53:39.943555   27257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:39.943864   27257 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:39.944055   27257 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:53:39.946674   27257 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:39.947111   27257 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:53:39.947134   27257 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:39.947274   27257 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:53:39.947566   27257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:39.947596   27257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:39.963119   27257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42065
	I0313 23:53:39.963459   27257 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:39.963913   27257 main.go:141] libmachine: Using API Version  1
	I0313 23:53:39.963932   27257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:39.964211   27257 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:39.964400   27257 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:53:39.964604   27257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:39.964633   27257 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:53:39.967398   27257 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:39.967807   27257 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:53:39.967839   27257 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:39.967996   27257 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:53:39.968130   27257 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:53:39.968284   27257 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:53:39.968393   27257 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:53:40.050970   27257 ssh_runner.go:195] Run: systemctl --version
	I0313 23:53:40.057475   27257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:40.073600   27257 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:53:40.073633   27257 api_server.go:166] Checking apiserver status ...
	I0313 23:53:40.073679   27257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:53:40.089653   27257 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0313 23:53:40.100320   27257 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:53:40.100370   27257 ssh_runner.go:195] Run: ls
	I0313 23:53:40.104987   27257 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:53:40.109351   27257 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:53:40.109371   27257 status.go:422] ha-504633 apiserver status = Running (err=<nil>)
	I0313 23:53:40.109381   27257 status.go:257] ha-504633 status: &{Name:ha-504633 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:53:40.109397   27257 status.go:255] checking status of ha-504633-m02 ...
	I0313 23:53:40.109777   27257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:40.109824   27257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:40.124886   27257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33453
	I0313 23:53:40.125329   27257 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:40.125812   27257 main.go:141] libmachine: Using API Version  1
	I0313 23:53:40.125836   27257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:40.126112   27257 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:40.126302   27257 main.go:141] libmachine: (ha-504633-m02) Calling .GetState
	I0313 23:53:40.127992   27257 status.go:330] ha-504633-m02 host status = "Running" (err=<nil>)
	I0313 23:53:40.128007   27257 host.go:66] Checking if "ha-504633-m02" exists ...
	I0313 23:53:40.128285   27257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:40.128320   27257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:40.142689   27257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43325
	I0313 23:53:40.143096   27257 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:40.143540   27257 main.go:141] libmachine: Using API Version  1
	I0313 23:53:40.143563   27257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:40.143873   27257 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:40.144037   27257 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:53:40.146882   27257 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:40.147241   27257 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:53:40.147268   27257 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:40.147369   27257 host.go:66] Checking if "ha-504633-m02" exists ...
	I0313 23:53:40.147635   27257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:40.147676   27257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:40.162702   27257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41907
	I0313 23:53:40.163076   27257 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:40.163521   27257 main.go:141] libmachine: Using API Version  1
	I0313 23:53:40.163544   27257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:40.163889   27257 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:40.164067   27257 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:53:40.164245   27257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:40.164263   27257 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:53:40.166789   27257 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:40.167177   27257 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:53:40.167206   27257 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:40.167341   27257 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:53:40.167516   27257 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:53:40.167673   27257 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:53:40.167823   27257 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	W0313 23:53:41.175092   27257 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:53:41.175158   27257 retry.go:31] will retry after 371.697358ms: dial tcp 192.168.39.47:22: connect: no route to host
	W0313 23:53:44.247068   27257 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.47:22: connect: no route to host
	W0313 23:53:44.247159   27257 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	E0313 23:53:44.247181   27257 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:53:44.247194   27257 status.go:257] ha-504633-m02 status: &{Name:ha-504633-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0313 23:53:44.247243   27257 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:53:44.247254   27257 status.go:255] checking status of ha-504633-m03 ...
	I0313 23:53:44.247649   27257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:44.247701   27257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:44.262494   27257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42709
	I0313 23:53:44.262924   27257 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:44.263371   27257 main.go:141] libmachine: Using API Version  1
	I0313 23:53:44.263393   27257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:44.263717   27257 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:44.263911   27257 main.go:141] libmachine: (ha-504633-m03) Calling .GetState
	I0313 23:53:44.265381   27257 status.go:330] ha-504633-m03 host status = "Running" (err=<nil>)
	I0313 23:53:44.265399   27257 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:53:44.265702   27257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:44.265737   27257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:44.279914   27257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43815
	I0313 23:53:44.280366   27257 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:44.280939   27257 main.go:141] libmachine: Using API Version  1
	I0313 23:53:44.280976   27257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:44.281290   27257 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:44.281512   27257 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:53:44.284601   27257 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:44.285061   27257 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:53:44.285100   27257 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:44.285292   27257 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:53:44.285700   27257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:44.285742   27257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:44.300617   27257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35633
	I0313 23:53:44.300996   27257 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:44.301404   27257 main.go:141] libmachine: Using API Version  1
	I0313 23:53:44.301425   27257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:44.301709   27257 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:44.301909   27257 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:53:44.302094   27257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:44.302113   27257 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:53:44.304715   27257 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:44.305127   27257 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:53:44.305164   27257 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:44.305275   27257 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:53:44.305414   27257 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:53:44.305570   27257 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:53:44.305696   27257 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:53:44.388113   27257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:44.404016   27257 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:53:44.404041   27257 api_server.go:166] Checking apiserver status ...
	I0313 23:53:44.404079   27257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:53:44.419753   27257 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup
	W0313 23:53:44.430154   27257 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:53:44.430219   27257 ssh_runner.go:195] Run: ls
	I0313 23:53:44.434968   27257 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:53:44.442971   27257 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:53:44.442992   27257 status.go:422] ha-504633-m03 apiserver status = Running (err=<nil>)
	I0313 23:53:44.443000   27257 status.go:257] ha-504633-m03 status: &{Name:ha-504633-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:53:44.443013   27257 status.go:255] checking status of ha-504633-m04 ...
	I0313 23:53:44.443286   27257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:44.443317   27257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:44.458258   27257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34057
	I0313 23:53:44.458663   27257 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:44.459138   27257 main.go:141] libmachine: Using API Version  1
	I0313 23:53:44.459161   27257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:44.459476   27257 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:44.459636   27257 main.go:141] libmachine: (ha-504633-m04) Calling .GetState
	I0313 23:53:44.461272   27257 status.go:330] ha-504633-m04 host status = "Running" (err=<nil>)
	I0313 23:53:44.461289   27257 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:53:44.461694   27257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:44.461735   27257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:44.476974   27257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35835
	I0313 23:53:44.477343   27257 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:44.477750   27257 main.go:141] libmachine: Using API Version  1
	I0313 23:53:44.477771   27257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:44.478151   27257 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:44.478327   27257 main.go:141] libmachine: (ha-504633-m04) Calling .GetIP
	I0313 23:53:44.480930   27257 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:44.481259   27257 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:53:44.481291   27257 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:44.481393   27257 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:53:44.481729   27257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:44.481777   27257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:44.499365   27257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42845
	I0313 23:53:44.499811   27257 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:44.500272   27257 main.go:141] libmachine: Using API Version  1
	I0313 23:53:44.500293   27257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:44.500606   27257 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:44.500872   27257 main.go:141] libmachine: (ha-504633-m04) Calling .DriverName
	I0313 23:53:44.501096   27257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:44.501130   27257 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHHostname
	I0313 23:53:44.503902   27257 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:44.504379   27257 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:53:44.504411   27257 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:44.504606   27257 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHPort
	I0313 23:53:44.504771   27257 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHKeyPath
	I0313 23:53:44.504920   27257 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHUsername
	I0313 23:53:44.505049   27257 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m04/id_rsa Username:docker}
	I0313 23:53:44.591097   27257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:44.604976   27257 status.go:257] ha-504633-m04 status: &{Name:ha-504633-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr: exit status 3 (3.764510355s)

                                                
                                                
-- stdout --
	ha-504633
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-504633-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0313 23:53:47.425609   27362 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:53:47.425848   27362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:53:47.425857   27362 out.go:304] Setting ErrFile to fd 2...
	I0313 23:53:47.425862   27362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:53:47.426029   27362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:53:47.426201   27362 out.go:298] Setting JSON to false
	I0313 23:53:47.426225   27362 mustload.go:65] Loading cluster: ha-504633
	I0313 23:53:47.426346   27362 notify.go:220] Checking for updates...
	I0313 23:53:47.426558   27362 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:53:47.426571   27362 status.go:255] checking status of ha-504633 ...
	I0313 23:53:47.426948   27362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:47.426995   27362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:47.446864   27362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
	I0313 23:53:47.447266   27362 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:47.448007   27362 main.go:141] libmachine: Using API Version  1
	I0313 23:53:47.448054   27362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:47.448425   27362 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:47.448661   27362 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:53:47.450286   27362 status.go:330] ha-504633 host status = "Running" (err=<nil>)
	I0313 23:53:47.450307   27362 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:53:47.450620   27362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:47.450654   27362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:47.464890   27362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0313 23:53:47.465265   27362 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:47.465695   27362 main.go:141] libmachine: Using API Version  1
	I0313 23:53:47.465713   27362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:47.466059   27362 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:47.466279   27362 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:53:47.468752   27362 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:47.469169   27362 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:53:47.469204   27362 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:47.469331   27362 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:53:47.469621   27362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:47.469661   27362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:47.483827   27362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0313 23:53:47.484247   27362 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:47.484664   27362 main.go:141] libmachine: Using API Version  1
	I0313 23:53:47.484680   27362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:47.484982   27362 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:47.485145   27362 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:53:47.485315   27362 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:47.485340   27362 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:53:47.487974   27362 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:47.488478   27362 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:53:47.488504   27362 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:47.488669   27362 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:53:47.488814   27362 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:53:47.488952   27362 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:53:47.489093   27362 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:53:47.579112   27362 ssh_runner.go:195] Run: systemctl --version
	I0313 23:53:47.585453   27362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:47.601273   27362 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:53:47.601295   27362 api_server.go:166] Checking apiserver status ...
	I0313 23:53:47.601337   27362 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:53:47.617782   27362 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0313 23:53:47.628631   27362 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:53:47.628690   27362 ssh_runner.go:195] Run: ls
	I0313 23:53:47.633597   27362 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:53:47.641346   27362 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:53:47.641367   27362 status.go:422] ha-504633 apiserver status = Running (err=<nil>)
	I0313 23:53:47.641379   27362 status.go:257] ha-504633 status: &{Name:ha-504633 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:53:47.641404   27362 status.go:255] checking status of ha-504633-m02 ...
	I0313 23:53:47.641771   27362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:47.641817   27362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:47.656652   27362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38241
	I0313 23:53:47.657064   27362 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:47.657495   27362 main.go:141] libmachine: Using API Version  1
	I0313 23:53:47.657515   27362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:47.657816   27362 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:47.658021   27362 main.go:141] libmachine: (ha-504633-m02) Calling .GetState
	I0313 23:53:47.659663   27362 status.go:330] ha-504633-m02 host status = "Running" (err=<nil>)
	I0313 23:53:47.659680   27362 host.go:66] Checking if "ha-504633-m02" exists ...
	I0313 23:53:47.659986   27362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:47.660017   27362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:47.675075   27362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33083
	I0313 23:53:47.675503   27362 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:47.675931   27362 main.go:141] libmachine: Using API Version  1
	I0313 23:53:47.675952   27362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:47.676248   27362 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:47.676423   27362 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:53:47.679148   27362 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:47.679617   27362 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:53:47.679643   27362 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:47.679807   27362 host.go:66] Checking if "ha-504633-m02" exists ...
	I0313 23:53:47.680074   27362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:47.680104   27362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:47.694802   27362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45985
	I0313 23:53:47.695231   27362 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:47.695746   27362 main.go:141] libmachine: Using API Version  1
	I0313 23:53:47.695769   27362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:47.696090   27362 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:47.696314   27362 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:53:47.696498   27362 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:47.696516   27362 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:53:47.699521   27362 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:47.699943   27362 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:53:47.699982   27362 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:47.700165   27362 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:53:47.700356   27362 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:53:47.700522   27362 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:53:47.700691   27362 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	W0313 23:53:50.775065   27362 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.47:22: connect: no route to host
	W0313 23:53:50.775155   27362 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	E0313 23:53:50.775178   27362 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:53:50.775188   27362 status.go:257] ha-504633-m02 status: &{Name:ha-504633-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0313 23:53:50.775214   27362 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:53:50.775226   27362 status.go:255] checking status of ha-504633-m03 ...
	I0313 23:53:50.775533   27362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:50.775581   27362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:50.790072   27362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34725
	I0313 23:53:50.790439   27362 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:50.790880   27362 main.go:141] libmachine: Using API Version  1
	I0313 23:53:50.790903   27362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:50.791231   27362 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:50.791418   27362 main.go:141] libmachine: (ha-504633-m03) Calling .GetState
	I0313 23:53:50.792808   27362 status.go:330] ha-504633-m03 host status = "Running" (err=<nil>)
	I0313 23:53:50.792822   27362 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:53:50.793129   27362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:50.793160   27362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:50.807603   27362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44593
	I0313 23:53:50.808034   27362 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:50.808567   27362 main.go:141] libmachine: Using API Version  1
	I0313 23:53:50.808587   27362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:50.808935   27362 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:50.809155   27362 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:53:50.811895   27362 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:50.812318   27362 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:53:50.812345   27362 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:50.812476   27362 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:53:50.812885   27362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:50.812933   27362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:50.827637   27362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33111
	I0313 23:53:50.828041   27362 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:50.828504   27362 main.go:141] libmachine: Using API Version  1
	I0313 23:53:50.828532   27362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:50.828830   27362 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:50.829014   27362 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:53:50.829205   27362 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:50.829228   27362 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:53:50.831780   27362 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:50.832200   27362 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:53:50.832229   27362 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:50.832364   27362 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:53:50.832554   27362 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:53:50.832689   27362 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:53:50.832811   27362 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:53:50.920122   27362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:50.938248   27362 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:53:50.938274   27362 api_server.go:166] Checking apiserver status ...
	I0313 23:53:50.938305   27362 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:53:50.955459   27362 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup
	W0313 23:53:50.967234   27362 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:53:50.967289   27362 ssh_runner.go:195] Run: ls
	I0313 23:53:50.972263   27362 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:53:50.977042   27362 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:53:50.977072   27362 status.go:422] ha-504633-m03 apiserver status = Running (err=<nil>)
	I0313 23:53:50.977084   27362 status.go:257] ha-504633-m03 status: &{Name:ha-504633-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:53:50.977101   27362 status.go:255] checking status of ha-504633-m04 ...
	I0313 23:53:50.977456   27362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:50.977492   27362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:50.992040   27362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35821
	I0313 23:53:50.992475   27362 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:50.992928   27362 main.go:141] libmachine: Using API Version  1
	I0313 23:53:50.992948   27362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:50.993263   27362 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:50.993454   27362 main.go:141] libmachine: (ha-504633-m04) Calling .GetState
	I0313 23:53:50.995087   27362 status.go:330] ha-504633-m04 host status = "Running" (err=<nil>)
	I0313 23:53:50.995101   27362 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:53:50.995410   27362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:50.995442   27362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:51.010938   27362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46697
	I0313 23:53:51.011412   27362 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:51.011847   27362 main.go:141] libmachine: Using API Version  1
	I0313 23:53:51.011880   27362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:51.012165   27362 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:51.012354   27362 main.go:141] libmachine: (ha-504633-m04) Calling .GetIP
	I0313 23:53:51.015328   27362 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:51.015776   27362 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:53:51.015807   27362 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:51.015968   27362 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:53:51.016280   27362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:51.016317   27362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:51.030502   27362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37209
	I0313 23:53:51.030920   27362 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:51.031344   27362 main.go:141] libmachine: Using API Version  1
	I0313 23:53:51.031364   27362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:51.031667   27362 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:51.031864   27362 main.go:141] libmachine: (ha-504633-m04) Calling .DriverName
	I0313 23:53:51.032030   27362 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:51.032047   27362 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHHostname
	I0313 23:53:51.034421   27362 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:51.034790   27362 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:53:51.034814   27362 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:51.034970   27362 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHPort
	I0313 23:53:51.035120   27362 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHKeyPath
	I0313 23:53:51.035265   27362 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHUsername
	I0313 23:53:51.035429   27362 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m04/id_rsa Username:docker}
	I0313 23:53:51.118633   27362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:51.132938   27362 status.go:257] ha-504633-m04 status: &{Name:ha-504633-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr: exit status 3 (3.766670992s)

-- stdout --
	ha-504633
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-504633-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0313 23:53:54.464474   27457 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:53:54.464611   27457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:53:54.464633   27457 out.go:304] Setting ErrFile to fd 2...
	I0313 23:53:54.464658   27457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:53:54.465176   27457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:53:54.465479   27457 out.go:298] Setting JSON to false
	I0313 23:53:54.465524   27457 mustload.go:65] Loading cluster: ha-504633
	I0313 23:53:54.465630   27457 notify.go:220] Checking for updates...
	I0313 23:53:54.466051   27457 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:53:54.466073   27457 status.go:255] checking status of ha-504633 ...
	I0313 23:53:54.466546   27457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:54.466614   27457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:54.487253   27457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I0313 23:53:54.487642   27457 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:54.488173   27457 main.go:141] libmachine: Using API Version  1
	I0313 23:53:54.488195   27457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:54.488532   27457 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:54.488754   27457 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:53:54.490431   27457 status.go:330] ha-504633 host status = "Running" (err=<nil>)
	I0313 23:53:54.490449   27457 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:53:54.490850   27457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:54.490888   27457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:54.505590   27457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41409
	I0313 23:53:54.506011   27457 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:54.506403   27457 main.go:141] libmachine: Using API Version  1
	I0313 23:53:54.506432   27457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:54.506756   27457 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:54.506989   27457 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:53:54.509942   27457 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:54.510423   27457 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:53:54.510454   27457 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:54.510541   27457 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:53:54.510883   27457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:54.510954   27457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:54.525194   27457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36595
	I0313 23:53:54.525582   27457 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:54.526059   27457 main.go:141] libmachine: Using API Version  1
	I0313 23:53:54.526095   27457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:54.526413   27457 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:54.526584   27457 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:53:54.526779   27457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:54.526814   27457 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:53:54.529893   27457 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:54.530379   27457 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:53:54.530413   27457 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:53:54.530581   27457 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:53:54.530788   27457 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:53:54.530945   27457 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:53:54.531092   27457 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:53:54.619391   27457 ssh_runner.go:195] Run: systemctl --version
	I0313 23:53:54.627090   27457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:54.645162   27457 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:53:54.645195   27457 api_server.go:166] Checking apiserver status ...
	I0313 23:53:54.645237   27457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:53:54.661145   27457 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0313 23:53:54.675198   27457 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:53:54.675258   27457 ssh_runner.go:195] Run: ls
	I0313 23:53:54.681246   27457 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:53:54.686400   27457 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:53:54.686421   27457 status.go:422] ha-504633 apiserver status = Running (err=<nil>)
	I0313 23:53:54.686430   27457 status.go:257] ha-504633 status: &{Name:ha-504633 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:53:54.686449   27457 status.go:255] checking status of ha-504633-m02 ...
	I0313 23:53:54.686817   27457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:54.686854   27457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:54.702970   27457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0313 23:53:54.703472   27457 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:54.703931   27457 main.go:141] libmachine: Using API Version  1
	I0313 23:53:54.703957   27457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:54.704263   27457 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:54.704438   27457 main.go:141] libmachine: (ha-504633-m02) Calling .GetState
	I0313 23:53:54.705977   27457 status.go:330] ha-504633-m02 host status = "Running" (err=<nil>)
	I0313 23:53:54.706005   27457 host.go:66] Checking if "ha-504633-m02" exists ...
	I0313 23:53:54.706400   27457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:54.706443   27457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:54.720591   27457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
	I0313 23:53:54.721009   27457 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:54.721466   27457 main.go:141] libmachine: Using API Version  1
	I0313 23:53:54.721495   27457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:54.721818   27457 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:54.721994   27457 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:53:54.724611   27457 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:54.725019   27457 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:53:54.725038   27457 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:54.725207   27457 host.go:66] Checking if "ha-504633-m02" exists ...
	I0313 23:53:54.725557   27457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:54.725602   27457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:54.742067   27457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46515
	I0313 23:53:54.742492   27457 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:54.743002   27457 main.go:141] libmachine: Using API Version  1
	I0313 23:53:54.743023   27457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:54.743342   27457 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:54.743528   27457 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:53:54.743695   27457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:54.743715   27457 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:53:54.746719   27457 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:54.747250   27457 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:53:54.747282   27457 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:53:54.747396   27457 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:53:54.747546   27457 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:53:54.747688   27457 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:53:54.747819   27457 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	W0313 23:53:57.815027   27457 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.47:22: connect: no route to host
	W0313 23:53:57.815124   27457 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	E0313 23:53:57.815148   27457 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:53:57.815160   27457 status.go:257] ha-504633-m02 status: &{Name:ha-504633-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0313 23:53:57.815187   27457 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:53:57.815199   27457 status.go:255] checking status of ha-504633-m03 ...
	I0313 23:53:57.815526   27457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:57.815586   27457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:57.830445   27457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41393
	I0313 23:53:57.830955   27457 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:57.831433   27457 main.go:141] libmachine: Using API Version  1
	I0313 23:53:57.831466   27457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:57.831752   27457 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:57.831933   27457 main.go:141] libmachine: (ha-504633-m03) Calling .GetState
	I0313 23:53:57.833472   27457 status.go:330] ha-504633-m03 host status = "Running" (err=<nil>)
	I0313 23:53:57.833491   27457 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:53:57.833791   27457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:57.833827   27457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:57.848291   27457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
	I0313 23:53:57.848700   27457 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:57.849155   27457 main.go:141] libmachine: Using API Version  1
	I0313 23:53:57.849184   27457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:57.849505   27457 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:57.849723   27457 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:53:57.852658   27457 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:57.853165   27457 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:53:57.853189   27457 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:57.853367   27457 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:53:57.853786   27457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:57.853834   27457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:57.870171   27457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0313 23:53:57.870580   27457 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:57.871138   27457 main.go:141] libmachine: Using API Version  1
	I0313 23:53:57.871155   27457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:57.871501   27457 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:57.871736   27457 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:53:57.871972   27457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:57.871996   27457 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:53:57.875155   27457 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:57.875609   27457 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:53:57.875638   27457 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:53:57.875783   27457 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:53:57.875945   27457 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:53:57.876101   27457 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:53:57.876244   27457 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:53:57.959817   27457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:57.977594   27457 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:53:57.977620   27457 api_server.go:166] Checking apiserver status ...
	I0313 23:53:57.977650   27457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:53:57.991351   27457 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup
	W0313 23:53:58.001809   27457 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:53:58.001871   27457 ssh_runner.go:195] Run: ls
	I0313 23:53:58.007033   27457 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:53:58.011661   27457 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:53:58.011706   27457 status.go:422] ha-504633-m03 apiserver status = Running (err=<nil>)
	I0313 23:53:58.011727   27457 status.go:257] ha-504633-m03 status: &{Name:ha-504633-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:53:58.011751   27457 status.go:255] checking status of ha-504633-m04 ...
	I0313 23:53:58.012042   27457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:58.012082   27457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:58.026736   27457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44583
	I0313 23:53:58.027165   27457 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:58.027679   27457 main.go:141] libmachine: Using API Version  1
	I0313 23:53:58.027706   27457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:58.028061   27457 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:58.028335   27457 main.go:141] libmachine: (ha-504633-m04) Calling .GetState
	I0313 23:53:58.029821   27457 status.go:330] ha-504633-m04 host status = "Running" (err=<nil>)
	I0313 23:53:58.029838   27457 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:53:58.030146   27457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:58.030186   27457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:58.045780   27457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38901
	I0313 23:53:58.046238   27457 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:58.046849   27457 main.go:141] libmachine: Using API Version  1
	I0313 23:53:58.046877   27457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:58.047164   27457 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:58.047371   27457 main.go:141] libmachine: (ha-504633-m04) Calling .GetIP
	I0313 23:53:58.050220   27457 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:58.050679   27457 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:53:58.050715   27457 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:58.050881   27457 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:53:58.051254   27457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:53:58.051297   27457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:53:58.066155   27457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I0313 23:53:58.066622   27457 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:53:58.067155   27457 main.go:141] libmachine: Using API Version  1
	I0313 23:53:58.067176   27457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:53:58.067429   27457 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:53:58.067620   27457 main.go:141] libmachine: (ha-504633-m04) Calling .DriverName
	I0313 23:53:58.067817   27457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:53:58.067840   27457 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHHostname
	I0313 23:53:58.070371   27457 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:58.070709   27457 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:53:58.070751   27457 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:53:58.070849   27457 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHPort
	I0313 23:53:58.071037   27457 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHKeyPath
	I0313 23:53:58.071186   27457 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHUsername
	I0313 23:53:58.071347   27457 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m04/id_rsa Username:docker}
	I0313 23:53:58.155063   27457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:53:58.171773   27457 status.go:257] ha-504633-m04 status: &{Name:ha-504633-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
E0313 23:54:04.020184   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr: exit status 3 (3.756983046s)

-- stdout --
	ha-504633
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-504633-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0313 23:54:03.501364   27563 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:54:03.501470   27563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:54:03.501478   27563 out.go:304] Setting ErrFile to fd 2...
	I0313 23:54:03.501482   27563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:54:03.501746   27563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:54:03.501938   27563 out.go:298] Setting JSON to false
	I0313 23:54:03.501964   27563 mustload.go:65] Loading cluster: ha-504633
	I0313 23:54:03.502117   27563 notify.go:220] Checking for updates...
	I0313 23:54:03.502319   27563 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:54:03.502331   27563 status.go:255] checking status of ha-504633 ...
	I0313 23:54:03.502704   27563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:03.502786   27563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:03.523181   27563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38585
	I0313 23:54:03.523621   27563 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:03.524174   27563 main.go:141] libmachine: Using API Version  1
	I0313 23:54:03.524195   27563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:03.524511   27563 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:03.524738   27563 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:54:03.526412   27563 status.go:330] ha-504633 host status = "Running" (err=<nil>)
	I0313 23:54:03.526425   27563 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:54:03.526756   27563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:03.526820   27563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:03.541641   27563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37235
	I0313 23:54:03.542062   27563 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:03.542681   27563 main.go:141] libmachine: Using API Version  1
	I0313 23:54:03.542706   27563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:03.543077   27563 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:03.543265   27563 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:54:03.545883   27563 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:54:03.546436   27563 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:54:03.546464   27563 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:54:03.546591   27563 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:54:03.546966   27563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:03.547035   27563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:03.561553   27563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38591
	I0313 23:54:03.561960   27563 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:03.562441   27563 main.go:141] libmachine: Using API Version  1
	I0313 23:54:03.562462   27563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:03.562794   27563 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:03.562985   27563 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:54:03.563177   27563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:54:03.563198   27563 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:54:03.566210   27563 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:54:03.566718   27563 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:54:03.566750   27563 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:54:03.566870   27563 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:54:03.567054   27563 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:54:03.567220   27563 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:54:03.567376   27563 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:54:03.656046   27563 ssh_runner.go:195] Run: systemctl --version
	I0313 23:54:03.663773   27563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:54:03.680891   27563 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:54:03.680928   27563 api_server.go:166] Checking apiserver status ...
	I0313 23:54:03.680970   27563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:54:03.697193   27563 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0313 23:54:03.708342   27563 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:54:03.708397   27563 ssh_runner.go:195] Run: ls
	I0313 23:54:03.713631   27563 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:54:03.718257   27563 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:54:03.718278   27563 status.go:422] ha-504633 apiserver status = Running (err=<nil>)
	I0313 23:54:03.718287   27563 status.go:257] ha-504633 status: &{Name:ha-504633 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:54:03.718300   27563 status.go:255] checking status of ha-504633-m02 ...
	I0313 23:54:03.718625   27563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:03.718661   27563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:03.733366   27563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44831
	I0313 23:54:03.733734   27563 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:03.734194   27563 main.go:141] libmachine: Using API Version  1
	I0313 23:54:03.734216   27563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:03.734616   27563 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:03.734825   27563 main.go:141] libmachine: (ha-504633-m02) Calling .GetState
	I0313 23:54:03.736400   27563 status.go:330] ha-504633-m02 host status = "Running" (err=<nil>)
	I0313 23:54:03.736426   27563 host.go:66] Checking if "ha-504633-m02" exists ...
	I0313 23:54:03.736866   27563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:03.736909   27563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:03.751477   27563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34151
	I0313 23:54:03.751885   27563 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:03.752322   27563 main.go:141] libmachine: Using API Version  1
	I0313 23:54:03.752345   27563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:03.752696   27563 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:03.752898   27563 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:54:03.755559   27563 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:54:03.755972   27563 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:54:03.756002   27563 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:54:03.756116   27563 host.go:66] Checking if "ha-504633-m02" exists ...
	I0313 23:54:03.756415   27563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:03.756456   27563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:03.773518   27563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0313 23:54:03.773929   27563 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:03.774326   27563 main.go:141] libmachine: Using API Version  1
	I0313 23:54:03.774366   27563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:03.774732   27563 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:03.774974   27563 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:54:03.775171   27563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:54:03.775197   27563 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:54:03.777725   27563 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:54:03.778120   27563 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:54:03.778164   27563 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:54:03.778273   27563 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:54:03.778457   27563 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:54:03.778623   27563 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:54:03.778782   27563 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	W0313 23:54:06.843031   27563 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.47:22: connect: no route to host
	W0313 23:54:06.843154   27563 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	E0313 23:54:06.843176   27563 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:54:06.843185   27563 status.go:257] ha-504633-m02 status: &{Name:ha-504633-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0313 23:54:06.843230   27563 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	I0313 23:54:06.843241   27563 status.go:255] checking status of ha-504633-m03 ...
	I0313 23:54:06.843574   27563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:06.843621   27563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:06.858520   27563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39787
	I0313 23:54:06.858972   27563 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:06.859511   27563 main.go:141] libmachine: Using API Version  1
	I0313 23:54:06.859540   27563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:06.859862   27563 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:06.860062   27563 main.go:141] libmachine: (ha-504633-m03) Calling .GetState
	I0313 23:54:06.861767   27563 status.go:330] ha-504633-m03 host status = "Running" (err=<nil>)
	I0313 23:54:06.861781   27563 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:54:06.862053   27563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:06.862088   27563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:06.877576   27563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40429
	I0313 23:54:06.877992   27563 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:06.878473   27563 main.go:141] libmachine: Using API Version  1
	I0313 23:54:06.878498   27563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:06.878814   27563 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:06.879010   27563 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:54:06.881773   27563 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:54:06.882267   27563 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:54:06.882286   27563 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:54:06.882464   27563 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:54:06.882837   27563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:06.882871   27563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:06.897328   27563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36925
	I0313 23:54:06.897764   27563 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:06.898146   27563 main.go:141] libmachine: Using API Version  1
	I0313 23:54:06.898185   27563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:06.898705   27563 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:06.898913   27563 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:54:06.899066   27563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:54:06.899090   27563 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:54:06.902343   27563 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:54:06.902881   27563 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:54:06.902905   27563 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:54:06.903081   27563 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:54:06.903352   27563 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:54:06.903514   27563 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:54:06.903639   27563 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:54:06.983922   27563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:54:06.999677   27563 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:54:06.999706   27563 api_server.go:166] Checking apiserver status ...
	I0313 23:54:06.999747   27563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:54:07.015759   27563 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup
	W0313 23:54:07.026074   27563 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:54:07.026137   27563 ssh_runner.go:195] Run: ls
	I0313 23:54:07.031440   27563 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:54:07.036246   27563 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:54:07.036270   27563 status.go:422] ha-504633-m03 apiserver status = Running (err=<nil>)
	I0313 23:54:07.036281   27563 status.go:257] ha-504633-m03 status: &{Name:ha-504633-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:54:07.036300   27563 status.go:255] checking status of ha-504633-m04 ...
	I0313 23:54:07.036601   27563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:07.036658   27563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:07.051314   27563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I0313 23:54:07.051797   27563 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:07.052420   27563 main.go:141] libmachine: Using API Version  1
	I0313 23:54:07.052444   27563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:07.052762   27563 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:07.052942   27563 main.go:141] libmachine: (ha-504633-m04) Calling .GetState
	I0313 23:54:07.054601   27563 status.go:330] ha-504633-m04 host status = "Running" (err=<nil>)
	I0313 23:54:07.054617   27563 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:54:07.054995   27563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:07.055042   27563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:07.069433   27563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33697
	I0313 23:54:07.069936   27563 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:07.070390   27563 main.go:141] libmachine: Using API Version  1
	I0313 23:54:07.070411   27563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:07.070810   27563 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:07.071005   27563 main.go:141] libmachine: (ha-504633-m04) Calling .GetIP
	I0313 23:54:07.074001   27563 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:54:07.074452   27563 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:54:07.074491   27563 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:54:07.074652   27563 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:54:07.074946   27563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:07.074977   27563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:07.090652   27563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36927
	I0313 23:54:07.091103   27563 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:07.091608   27563 main.go:141] libmachine: Using API Version  1
	I0313 23:54:07.091634   27563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:07.091920   27563 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:07.092092   27563 main.go:141] libmachine: (ha-504633-m04) Calling .DriverName
	I0313 23:54:07.092268   27563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:54:07.092287   27563 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHHostname
	I0313 23:54:07.095132   27563 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:54:07.095597   27563 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:54:07.095635   27563 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:54:07.095780   27563 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHPort
	I0313 23:54:07.095935   27563 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHKeyPath
	I0313 23:54:07.096066   27563 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHUsername
	I0313 23:54:07.096215   27563 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m04/id_rsa Username:docker}
	I0313 23:54:07.181699   27563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:54:07.198435   27563 status.go:257] ha-504633-m04 status: &{Name:ha-504633-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr: exit status 7 (651.463131ms)

                                                
                                                
-- stdout --
	ha-504633
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-504633-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0313 23:54:15.421886   27691 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:54:15.422111   27691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:54:15.422119   27691 out.go:304] Setting ErrFile to fd 2...
	I0313 23:54:15.422123   27691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:54:15.422299   27691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:54:15.422472   27691 out.go:298] Setting JSON to false
	I0313 23:54:15.422496   27691 mustload.go:65] Loading cluster: ha-504633
	I0313 23:54:15.422543   27691 notify.go:220] Checking for updates...
	I0313 23:54:15.422908   27691 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:54:15.422925   27691 status.go:255] checking status of ha-504633 ...
	I0313 23:54:15.423385   27691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:15.423446   27691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:15.442843   27691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I0313 23:54:15.443439   27691 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:15.443989   27691 main.go:141] libmachine: Using API Version  1
	I0313 23:54:15.444008   27691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:15.444329   27691 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:15.444531   27691 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:54:15.446162   27691 status.go:330] ha-504633 host status = "Running" (err=<nil>)
	I0313 23:54:15.446179   27691 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:54:15.446444   27691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:15.446485   27691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:15.461173   27691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40147
	I0313 23:54:15.461602   27691 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:15.462169   27691 main.go:141] libmachine: Using API Version  1
	I0313 23:54:15.462203   27691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:15.462538   27691 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:15.462815   27691 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:54:15.465992   27691 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:54:15.466522   27691 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:54:15.466574   27691 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:54:15.466710   27691 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:54:15.467057   27691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:15.467099   27691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:15.482618   27691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I0313 23:54:15.483073   27691 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:15.483554   27691 main.go:141] libmachine: Using API Version  1
	I0313 23:54:15.483576   27691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:15.483874   27691 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:15.484100   27691 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:54:15.484298   27691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:54:15.484327   27691 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:54:15.486918   27691 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:54:15.487418   27691 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:54:15.487438   27691 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:54:15.487601   27691 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:54:15.487745   27691 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:54:15.487904   27691 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:54:15.488035   27691 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:54:15.571282   27691 ssh_runner.go:195] Run: systemctl --version
	I0313 23:54:15.577989   27691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:54:15.594972   27691 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:54:15.595005   27691 api_server.go:166] Checking apiserver status ...
	I0313 23:54:15.595048   27691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:54:15.610535   27691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0313 23:54:15.620792   27691 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:54:15.620859   27691 ssh_runner.go:195] Run: ls
	I0313 23:54:15.626094   27691 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:54:15.633013   27691 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:54:15.633043   27691 status.go:422] ha-504633 apiserver status = Running (err=<nil>)
	I0313 23:54:15.633054   27691 status.go:257] ha-504633 status: &{Name:ha-504633 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:54:15.633072   27691 status.go:255] checking status of ha-504633-m02 ...
	I0313 23:54:15.633373   27691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:15.633408   27691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:15.648140   27691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41617
	I0313 23:54:15.648563   27691 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:15.648996   27691 main.go:141] libmachine: Using API Version  1
	I0313 23:54:15.649017   27691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:15.649432   27691 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:15.649650   27691 main.go:141] libmachine: (ha-504633-m02) Calling .GetState
	I0313 23:54:15.651365   27691 status.go:330] ha-504633-m02 host status = "Stopped" (err=<nil>)
	I0313 23:54:15.651383   27691 status.go:343] host is not running, skipping remaining checks
	I0313 23:54:15.651390   27691 status.go:257] ha-504633-m02 status: &{Name:ha-504633-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:54:15.651410   27691 status.go:255] checking status of ha-504633-m03 ...
	I0313 23:54:15.651739   27691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:15.651777   27691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:15.667428   27691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46779
	I0313 23:54:15.667818   27691 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:15.668370   27691 main.go:141] libmachine: Using API Version  1
	I0313 23:54:15.668396   27691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:15.668744   27691 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:15.668931   27691 main.go:141] libmachine: (ha-504633-m03) Calling .GetState
	I0313 23:54:15.670701   27691 status.go:330] ha-504633-m03 host status = "Running" (err=<nil>)
	I0313 23:54:15.670719   27691 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:54:15.671130   27691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:15.671182   27691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:15.686294   27691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38003
	I0313 23:54:15.686713   27691 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:15.687211   27691 main.go:141] libmachine: Using API Version  1
	I0313 23:54:15.687255   27691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:15.687611   27691 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:15.687838   27691 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:54:15.690833   27691 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:54:15.691300   27691 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:54:15.691340   27691 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:54:15.691421   27691 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:54:15.691734   27691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:15.691780   27691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:15.706852   27691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39847
	I0313 23:54:15.707271   27691 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:15.707735   27691 main.go:141] libmachine: Using API Version  1
	I0313 23:54:15.707754   27691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:15.708041   27691 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:15.708211   27691 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:54:15.708385   27691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:54:15.708405   27691 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:54:15.711481   27691 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:54:15.711984   27691 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:54:15.712022   27691 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:54:15.712168   27691 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:54:15.712390   27691 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:54:15.712533   27691 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:54:15.712688   27691 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:54:15.791019   27691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:54:15.807516   27691 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:54:15.807550   27691 api_server.go:166] Checking apiserver status ...
	I0313 23:54:15.807641   27691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:54:15.823495   27691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup
	W0313 23:54:15.835897   27691 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:54:15.835946   27691 ssh_runner.go:195] Run: ls
	I0313 23:54:15.841642   27691 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:54:15.846497   27691 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:54:15.846522   27691 status.go:422] ha-504633-m03 apiserver status = Running (err=<nil>)
	I0313 23:54:15.846534   27691 status.go:257] ha-504633-m03 status: &{Name:ha-504633-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:54:15.846554   27691 status.go:255] checking status of ha-504633-m04 ...
	I0313 23:54:15.846891   27691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:15.846935   27691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:15.863154   27691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I0313 23:54:15.863667   27691 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:15.864191   27691 main.go:141] libmachine: Using API Version  1
	I0313 23:54:15.864211   27691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:15.864529   27691 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:15.864704   27691 main.go:141] libmachine: (ha-504633-m04) Calling .GetState
	I0313 23:54:15.866260   27691 status.go:330] ha-504633-m04 host status = "Running" (err=<nil>)
	I0313 23:54:15.866281   27691 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:54:15.866628   27691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:15.866664   27691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:15.882454   27691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32859
	I0313 23:54:15.882926   27691 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:15.883364   27691 main.go:141] libmachine: Using API Version  1
	I0313 23:54:15.883388   27691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:15.883762   27691 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:15.883947   27691 main.go:141] libmachine: (ha-504633-m04) Calling .GetIP
	I0313 23:54:15.886962   27691 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:54:15.887394   27691 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:54:15.887432   27691 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:54:15.887533   27691 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:54:15.887937   27691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:15.887983   27691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:15.903813   27691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I0313 23:54:15.904318   27691 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:15.904943   27691 main.go:141] libmachine: Using API Version  1
	I0313 23:54:15.904964   27691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:15.905277   27691 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:15.905517   27691 main.go:141] libmachine: (ha-504633-m04) Calling .DriverName
	I0313 23:54:15.905744   27691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:54:15.905764   27691 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHHostname
	I0313 23:54:15.908652   27691 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:54:15.909102   27691 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:54:15.909141   27691 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:54:15.909257   27691 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHPort
	I0313 23:54:15.909443   27691 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHKeyPath
	I0313 23:54:15.909627   27691 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHUsername
	I0313 23:54:15.909766   27691 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m04/id_rsa Username:docker}
	I0313 23:54:15.998942   27691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:54:16.015083   27691 status.go:257] ha-504633-m04 status: &{Name:ha-504633-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr: exit status 7 (641.974626ms)

                                                
                                                
-- stdout --
	ha-504633
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-504633-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0313 23:54:24.548949   27784 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:54:24.549202   27784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:54:24.549212   27784 out.go:304] Setting ErrFile to fd 2...
	I0313 23:54:24.549216   27784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:54:24.549483   27784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:54:24.549705   27784 out.go:298] Setting JSON to false
	I0313 23:54:24.549735   27784 mustload.go:65] Loading cluster: ha-504633
	I0313 23:54:24.549773   27784 notify.go:220] Checking for updates...
	I0313 23:54:24.550080   27784 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:54:24.550093   27784 status.go:255] checking status of ha-504633 ...
	I0313 23:54:24.550507   27784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:24.550582   27784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:24.567361   27784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34369
	I0313 23:54:24.567803   27784 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:24.568473   27784 main.go:141] libmachine: Using API Version  1
	I0313 23:54:24.568513   27784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:24.568911   27784 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:24.569131   27784 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:54:24.570945   27784 status.go:330] ha-504633 host status = "Running" (err=<nil>)
	I0313 23:54:24.570967   27784 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:54:24.571239   27784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:24.571272   27784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:24.586079   27784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34919
	I0313 23:54:24.586445   27784 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:24.586920   27784 main.go:141] libmachine: Using API Version  1
	I0313 23:54:24.586944   27784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:24.587259   27784 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:24.587478   27784 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:54:24.590262   27784 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:54:24.590714   27784 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:54:24.590741   27784 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:54:24.590872   27784 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:54:24.591148   27784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:24.591183   27784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:24.606144   27784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39827
	I0313 23:54:24.606516   27784 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:24.606941   27784 main.go:141] libmachine: Using API Version  1
	I0313 23:54:24.606959   27784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:24.607262   27784 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:24.607441   27784 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:54:24.607644   27784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:54:24.607676   27784 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:54:24.610437   27784 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:54:24.610832   27784 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:54:24.610858   27784 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:54:24.610987   27784 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:54:24.611169   27784 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:54:24.611318   27784 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:54:24.611465   27784 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:54:24.695575   27784 ssh_runner.go:195] Run: systemctl --version
	I0313 23:54:24.702406   27784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:54:24.716406   27784 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:54:24.716437   27784 api_server.go:166] Checking apiserver status ...
	I0313 23:54:24.716474   27784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:54:24.730634   27784 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0313 23:54:24.742045   27784 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:54:24.742113   27784 ssh_runner.go:195] Run: ls
	I0313 23:54:24.747178   27784 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:54:24.752052   27784 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:54:24.752074   27784 status.go:422] ha-504633 apiserver status = Running (err=<nil>)
	I0313 23:54:24.752086   27784 status.go:257] ha-504633 status: &{Name:ha-504633 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:54:24.752106   27784 status.go:255] checking status of ha-504633-m02 ...
	I0313 23:54:24.752394   27784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:24.752434   27784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:24.767019   27784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0313 23:54:24.767484   27784 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:24.768060   27784 main.go:141] libmachine: Using API Version  1
	I0313 23:54:24.768083   27784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:24.768491   27784 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:24.768703   27784 main.go:141] libmachine: (ha-504633-m02) Calling .GetState
	I0313 23:54:24.770664   27784 status.go:330] ha-504633-m02 host status = "Stopped" (err=<nil>)
	I0313 23:54:24.770678   27784 status.go:343] host is not running, skipping remaining checks
	I0313 23:54:24.770684   27784 status.go:257] ha-504633-m02 status: &{Name:ha-504633-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:54:24.770715   27784 status.go:255] checking status of ha-504633-m03 ...
	I0313 23:54:24.771016   27784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:24.771057   27784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:24.786283   27784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38197
	I0313 23:54:24.786740   27784 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:24.787346   27784 main.go:141] libmachine: Using API Version  1
	I0313 23:54:24.787370   27784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:24.787807   27784 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:24.788076   27784 main.go:141] libmachine: (ha-504633-m03) Calling .GetState
	I0313 23:54:24.790364   27784 status.go:330] ha-504633-m03 host status = "Running" (err=<nil>)
	I0313 23:54:24.790381   27784 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:54:24.790658   27784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:24.790694   27784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:24.805454   27784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40283
	I0313 23:54:24.806008   27784 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:24.806520   27784 main.go:141] libmachine: Using API Version  1
	I0313 23:54:24.806543   27784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:24.806971   27784 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:24.807207   27784 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:54:24.810202   27784 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:54:24.810700   27784 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:54:24.810729   27784 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:54:24.810865   27784 host.go:66] Checking if "ha-504633-m03" exists ...
	I0313 23:54:24.811182   27784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:24.811218   27784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:24.826166   27784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43129
	I0313 23:54:24.826592   27784 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:24.827077   27784 main.go:141] libmachine: Using API Version  1
	I0313 23:54:24.827096   27784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:24.827395   27784 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:24.827602   27784 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:54:24.827757   27784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:54:24.827777   27784 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:54:24.830548   27784 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:54:24.830997   27784 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:54:24.831030   27784 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:54:24.831117   27784 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:54:24.831258   27784 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:54:24.831422   27784 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:54:24.831585   27784 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:54:24.912425   27784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:54:24.931036   27784 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0313 23:54:24.931068   27784 api_server.go:166] Checking apiserver status ...
	I0313 23:54:24.931136   27784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:54:24.947142   27784 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup
	W0313 23:54:24.962078   27784 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:54:24.962136   27784 ssh_runner.go:195] Run: ls
	I0313 23:54:24.966935   27784 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:54:24.971894   27784 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:54:24.971917   27784 status.go:422] ha-504633-m03 apiserver status = Running (err=<nil>)
	I0313 23:54:24.971924   27784 status.go:257] ha-504633-m03 status: &{Name:ha-504633-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:54:24.971939   27784 status.go:255] checking status of ha-504633-m04 ...
	I0313 23:54:24.972226   27784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:24.972264   27784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:24.987292   27784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42641
	I0313 23:54:24.987735   27784 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:24.988290   27784 main.go:141] libmachine: Using API Version  1
	I0313 23:54:24.988315   27784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:24.988673   27784 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:24.988902   27784 main.go:141] libmachine: (ha-504633-m04) Calling .GetState
	I0313 23:54:24.990547   27784 status.go:330] ha-504633-m04 host status = "Running" (err=<nil>)
	I0313 23:54:24.990563   27784 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:54:24.990855   27784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:24.990888   27784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:25.005719   27784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41439
	I0313 23:54:25.006187   27784 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:25.006709   27784 main.go:141] libmachine: Using API Version  1
	I0313 23:54:25.006731   27784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:25.007082   27784 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:25.007265   27784 main.go:141] libmachine: (ha-504633-m04) Calling .GetIP
	I0313 23:54:25.009860   27784 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:54:25.010263   27784 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:54:25.010286   27784 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:54:25.010407   27784 host.go:66] Checking if "ha-504633-m04" exists ...
	I0313 23:54:25.010711   27784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:25.010751   27784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:25.027191   27784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I0313 23:54:25.027595   27784 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:25.028112   27784 main.go:141] libmachine: Using API Version  1
	I0313 23:54:25.028140   27784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:25.028477   27784 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:25.028691   27784 main.go:141] libmachine: (ha-504633-m04) Calling .DriverName
	I0313 23:54:25.028887   27784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:54:25.028912   27784 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHHostname
	I0313 23:54:25.032615   27784 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:54:25.033099   27784 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:54:25.033124   27784 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:54:25.033338   27784 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHPort
	I0313 23:54:25.033542   27784 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHKeyPath
	I0313 23:54:25.033714   27784 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHUsername
	I0313 23:54:25.033876   27784 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m04/id_rsa Username:docker}
	I0313 23:54:25.118434   27784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:54:25.133784   27784 status.go:257] ha-504633-m04 status: &{Name:ha-504633-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr" : exit status 7
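Note on the failure above: every status run exits 7 because ha-504633-m02 never comes back after the restart. The first run (pid 27563) could not reach the guest over SSH ("dial tcp 192.168.39.47:22: connect: no route to host") and marked the node Error; the later runs (27691, 27784) see the libvirt domain as Stopped and skip the remaining checks. A minimal manual reproduction of the probe that failed, using only the key path, user, and guest IP recorded in the log above (illustrative sketch, not part of the captured test output; the -o ConnectTimeout flag is an added convenience, not something the test passes):

    # illustrative only: same key path, user, guest IP, and df command as logged above
    ssh -o ConnectTimeout=5 \
      -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa \
      docker@192.168.39.47 "df -h /var | awk 'NR==2{print \$5}'"

When the guest is unreachable, ssh reports a connection failure to 192.168.39.47:22, matching the condition that made minikube report the node as Error and then Stopped.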
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-504633 -n ha-504633
helpers_test.go:244: <<< TestMutliControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-504633 logs -n 25: (1.482932184s)
helpers_test.go:252: TestMutliControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633:/home/docker/cp-test_ha-504633-m03_ha-504633.txt                       |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633 sudo cat                                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m03_ha-504633.txt                                 |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m02:/home/docker/cp-test_ha-504633-m03_ha-504633-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m02 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m03_ha-504633-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04:/home/docker/cp-test_ha-504633-m03_ha-504633-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m04 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m03_ha-504633-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp testdata/cp-test.txt                                                | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile1259924449/001/cp-test_ha-504633-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633:/home/docker/cp-test_ha-504633-m04_ha-504633.txt                       |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633 sudo cat                                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633.txt                                 |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m02:/home/docker/cp-test_ha-504633-m04_ha-504633-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m02 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03:/home/docker/cp-test_ha-504633-m04_ha-504633-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m03 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-504633 node stop m02 -v=7                                                     | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-504633 node start m02 -v=7                                                    | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/13 23:44:32
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0313 23:44:32.125716   22414 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:44:32.125833   22414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:44:32.125839   22414 out.go:304] Setting ErrFile to fd 2...
	I0313 23:44:32.125843   22414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:44:32.126008   22414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:44:32.126601   22414 out.go:298] Setting JSON to false
	I0313 23:44:32.127455   22414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1615,"bootTime":1710371857,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:44:32.127515   22414 start.go:139] virtualization: kvm guest
	I0313 23:44:32.129842   22414 out.go:177] * [ha-504633] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:44:32.131786   22414 out.go:177]   - MINIKUBE_LOCATION=18375
	I0313 23:44:32.131832   22414 notify.go:220] Checking for updates...
	I0313 23:44:32.134799   22414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:44:32.136125   22414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:44:32.137286   22414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:44:32.138690   22414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0313 23:44:32.140047   22414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0313 23:44:32.141601   22414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:44:32.176193   22414 out.go:177] * Using the kvm2 driver based on user configuration
	I0313 23:44:32.177334   22414 start.go:297] selected driver: kvm2
	I0313 23:44:32.177345   22414 start.go:901] validating driver "kvm2" against <nil>
	I0313 23:44:32.177355   22414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0313 23:44:32.178044   22414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:44:32.178113   22414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0313 23:44:32.192528   22414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0313 23:44:32.192572   22414 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0313 23:44:32.192767   22414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0313 23:44:32.192791   22414 cni.go:84] Creating CNI manager for ""
	I0313 23:44:32.192797   22414 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0313 23:44:32.192805   22414 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0313 23:44:32.192864   22414 start.go:340] cluster config:
	{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:44:32.192964   22414 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:44:32.194590   22414 out.go:177] * Starting "ha-504633" primary control-plane node in "ha-504633" cluster
	I0313 23:44:32.195784   22414 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:44:32.195820   22414 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0313 23:44:32.195829   22414 cache.go:56] Caching tarball of preloaded images
	I0313 23:44:32.195907   22414 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0313 23:44:32.195918   22414 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0313 23:44:32.196194   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:44:32.196212   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json: {Name:mk320919ac7140aab6984d0075187e5388514b68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:44:32.196336   22414 start.go:360] acquireMachinesLock for ha-504633: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0313 23:44:32.196362   22414 start.go:364] duration metric: took 14.269µs to acquireMachinesLock for "ha-504633"
	I0313 23:44:32.196375   22414 start.go:93] Provisioning new machine with config: &{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:44:32.196424   22414 start.go:125] createHost starting for "" (driver="kvm2")
	I0313 23:44:32.198067   22414 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0313 23:44:32.198188   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:44:32.198234   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:44:32.212049   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42649
	I0313 23:44:32.212441   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:44:32.213011   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:44:32.213036   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:44:32.213349   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:44:32.213562   22414 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:44:32.213737   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:32.213863   22414 start.go:159] libmachine.API.Create for "ha-504633" (driver="kvm2")
	I0313 23:44:32.213890   22414 client.go:168] LocalClient.Create starting
	I0313 23:44:32.213924   22414 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem
	I0313 23:44:32.213961   22414 main.go:141] libmachine: Decoding PEM data...
	I0313 23:44:32.213978   22414 main.go:141] libmachine: Parsing certificate...
	I0313 23:44:32.214031   22414 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem
	I0313 23:44:32.214049   22414 main.go:141] libmachine: Decoding PEM data...
	I0313 23:44:32.214063   22414 main.go:141] libmachine: Parsing certificate...
	I0313 23:44:32.214076   22414 main.go:141] libmachine: Running pre-create checks...
	I0313 23:44:32.214087   22414 main.go:141] libmachine: (ha-504633) Calling .PreCreateCheck
	I0313 23:44:32.214377   22414 main.go:141] libmachine: (ha-504633) Calling .GetConfigRaw
	I0313 23:44:32.214733   22414 main.go:141] libmachine: Creating machine...
	I0313 23:44:32.214752   22414 main.go:141] libmachine: (ha-504633) Calling .Create
	I0313 23:44:32.214892   22414 main.go:141] libmachine: (ha-504633) Creating KVM machine...
	I0313 23:44:32.216190   22414 main.go:141] libmachine: (ha-504633) DBG | found existing default KVM network
	I0313 23:44:32.216832   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:32.216715   22437 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0313 23:44:32.216874   22414 main.go:141] libmachine: (ha-504633) DBG | created network xml: 
	I0313 23:44:32.216898   22414 main.go:141] libmachine: (ha-504633) DBG | <network>
	I0313 23:44:32.216923   22414 main.go:141] libmachine: (ha-504633) DBG |   <name>mk-ha-504633</name>
	I0313 23:44:32.216943   22414 main.go:141] libmachine: (ha-504633) DBG |   <dns enable='no'/>
	I0313 23:44:32.216955   22414 main.go:141] libmachine: (ha-504633) DBG |   
	I0313 23:44:32.216969   22414 main.go:141] libmachine: (ha-504633) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0313 23:44:32.216981   22414 main.go:141] libmachine: (ha-504633) DBG |     <dhcp>
	I0313 23:44:32.216991   22414 main.go:141] libmachine: (ha-504633) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0313 23:44:32.217004   22414 main.go:141] libmachine: (ha-504633) DBG |     </dhcp>
	I0313 23:44:32.217013   22414 main.go:141] libmachine: (ha-504633) DBG |   </ip>
	I0313 23:44:32.217025   22414 main.go:141] libmachine: (ha-504633) DBG |   
	I0313 23:44:32.217035   22414 main.go:141] libmachine: (ha-504633) DBG | </network>
	I0313 23:44:32.217046   22414 main.go:141] libmachine: (ha-504633) DBG | 
	I0313 23:44:32.221854   22414 main.go:141] libmachine: (ha-504633) DBG | trying to create private KVM network mk-ha-504633 192.168.39.0/24...
	I0313 23:44:32.289918   22414 main.go:141] libmachine: (ha-504633) DBG | private KVM network mk-ha-504633 192.168.39.0/24 created
	I0313 23:44:32.289947   22414 main.go:141] libmachine: (ha-504633) Setting up store path in /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633 ...
	I0313 23:44:32.289992   22414 main.go:141] libmachine: (ha-504633) Building disk image from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0313 23:44:32.290024   22414 main.go:141] libmachine: (ha-504633) Downloading /home/jenkins/minikube-integration/18375-4912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0313 23:44:32.290045   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:32.289899   22437 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:44:32.512558   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:32.512388   22437 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa...
	I0313 23:44:32.585720   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:32.585595   22437 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/ha-504633.rawdisk...
	I0313 23:44:32.585766   22414 main.go:141] libmachine: (ha-504633) DBG | Writing magic tar header
	I0313 23:44:32.585776   22414 main.go:141] libmachine: (ha-504633) DBG | Writing SSH key tar header
	I0313 23:44:32.585789   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:32.585701   22437 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633 ...
	I0313 23:44:32.585807   22414 main.go:141] libmachine: (ha-504633) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633
	I0313 23:44:32.585877   22414 main.go:141] libmachine: (ha-504633) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines
	I0313 23:44:32.585911   22414 main.go:141] libmachine: (ha-504633) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:44:32.585924   22414 main.go:141] libmachine: (ha-504633) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633 (perms=drwx------)
	I0313 23:44:32.585939   22414 main.go:141] libmachine: (ha-504633) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines (perms=drwxr-xr-x)
	I0313 23:44:32.585954   22414 main.go:141] libmachine: (ha-504633) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube (perms=drwxr-xr-x)
	I0313 23:44:32.585965   22414 main.go:141] libmachine: (ha-504633) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912
	I0313 23:44:32.585981   22414 main.go:141] libmachine: (ha-504633) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0313 23:44:32.585991   22414 main.go:141] libmachine: (ha-504633) DBG | Checking permissions on dir: /home/jenkins
	I0313 23:44:32.585998   22414 main.go:141] libmachine: (ha-504633) DBG | Checking permissions on dir: /home
	I0313 23:44:32.586011   22414 main.go:141] libmachine: (ha-504633) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912 (perms=drwxrwxr-x)
	I0313 23:44:32.586017   22414 main.go:141] libmachine: (ha-504633) DBG | Skipping /home - not owner
	I0313 23:44:32.586040   22414 main.go:141] libmachine: (ha-504633) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0313 23:44:32.586063   22414 main.go:141] libmachine: (ha-504633) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0313 23:44:32.586075   22414 main.go:141] libmachine: (ha-504633) Creating domain...
	I0313 23:44:32.587118   22414 main.go:141] libmachine: (ha-504633) define libvirt domain using xml: 
	I0313 23:44:32.587140   22414 main.go:141] libmachine: (ha-504633) <domain type='kvm'>
	I0313 23:44:32.587152   22414 main.go:141] libmachine: (ha-504633)   <name>ha-504633</name>
	I0313 23:44:32.587157   22414 main.go:141] libmachine: (ha-504633)   <memory unit='MiB'>2200</memory>
	I0313 23:44:32.587162   22414 main.go:141] libmachine: (ha-504633)   <vcpu>2</vcpu>
	I0313 23:44:32.587166   22414 main.go:141] libmachine: (ha-504633)   <features>
	I0313 23:44:32.587171   22414 main.go:141] libmachine: (ha-504633)     <acpi/>
	I0313 23:44:32.587175   22414 main.go:141] libmachine: (ha-504633)     <apic/>
	I0313 23:44:32.587180   22414 main.go:141] libmachine: (ha-504633)     <pae/>
	I0313 23:44:32.587184   22414 main.go:141] libmachine: (ha-504633)     
	I0313 23:44:32.587189   22414 main.go:141] libmachine: (ha-504633)   </features>
	I0313 23:44:32.587194   22414 main.go:141] libmachine: (ha-504633)   <cpu mode='host-passthrough'>
	I0313 23:44:32.587199   22414 main.go:141] libmachine: (ha-504633)   
	I0313 23:44:32.587205   22414 main.go:141] libmachine: (ha-504633)   </cpu>
	I0313 23:44:32.587210   22414 main.go:141] libmachine: (ha-504633)   <os>
	I0313 23:44:32.587218   22414 main.go:141] libmachine: (ha-504633)     <type>hvm</type>
	I0313 23:44:32.587223   22414 main.go:141] libmachine: (ha-504633)     <boot dev='cdrom'/>
	I0313 23:44:32.587227   22414 main.go:141] libmachine: (ha-504633)     <boot dev='hd'/>
	I0313 23:44:32.587233   22414 main.go:141] libmachine: (ha-504633)     <bootmenu enable='no'/>
	I0313 23:44:32.587238   22414 main.go:141] libmachine: (ha-504633)   </os>
	I0313 23:44:32.587254   22414 main.go:141] libmachine: (ha-504633)   <devices>
	I0313 23:44:32.587270   22414 main.go:141] libmachine: (ha-504633)     <disk type='file' device='cdrom'>
	I0313 23:44:32.587278   22414 main.go:141] libmachine: (ha-504633)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/boot2docker.iso'/>
	I0313 23:44:32.587282   22414 main.go:141] libmachine: (ha-504633)       <target dev='hdc' bus='scsi'/>
	I0313 23:44:32.587287   22414 main.go:141] libmachine: (ha-504633)       <readonly/>
	I0313 23:44:32.587291   22414 main.go:141] libmachine: (ha-504633)     </disk>
	I0313 23:44:32.587296   22414 main.go:141] libmachine: (ha-504633)     <disk type='file' device='disk'>
	I0313 23:44:32.587305   22414 main.go:141] libmachine: (ha-504633)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0313 23:44:32.587315   22414 main.go:141] libmachine: (ha-504633)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/ha-504633.rawdisk'/>
	I0313 23:44:32.587322   22414 main.go:141] libmachine: (ha-504633)       <target dev='hda' bus='virtio'/>
	I0313 23:44:32.587326   22414 main.go:141] libmachine: (ha-504633)     </disk>
	I0313 23:44:32.587332   22414 main.go:141] libmachine: (ha-504633)     <interface type='network'>
	I0313 23:44:32.587337   22414 main.go:141] libmachine: (ha-504633)       <source network='mk-ha-504633'/>
	I0313 23:44:32.587342   22414 main.go:141] libmachine: (ha-504633)       <model type='virtio'/>
	I0313 23:44:32.587372   22414 main.go:141] libmachine: (ha-504633)     </interface>
	I0313 23:44:32.587398   22414 main.go:141] libmachine: (ha-504633)     <interface type='network'>
	I0313 23:44:32.587410   22414 main.go:141] libmachine: (ha-504633)       <source network='default'/>
	I0313 23:44:32.587422   22414 main.go:141] libmachine: (ha-504633)       <model type='virtio'/>
	I0313 23:44:32.587432   22414 main.go:141] libmachine: (ha-504633)     </interface>
	I0313 23:44:32.587443   22414 main.go:141] libmachine: (ha-504633)     <serial type='pty'>
	I0313 23:44:32.587457   22414 main.go:141] libmachine: (ha-504633)       <target port='0'/>
	I0313 23:44:32.587467   22414 main.go:141] libmachine: (ha-504633)     </serial>
	I0313 23:44:32.587494   22414 main.go:141] libmachine: (ha-504633)     <console type='pty'>
	I0313 23:44:32.587521   22414 main.go:141] libmachine: (ha-504633)       <target type='serial' port='0'/>
	I0313 23:44:32.587540   22414 main.go:141] libmachine: (ha-504633)     </console>
	I0313 23:44:32.587551   22414 main.go:141] libmachine: (ha-504633)     <rng model='virtio'>
	I0313 23:44:32.587563   22414 main.go:141] libmachine: (ha-504633)       <backend model='random'>/dev/random</backend>
	I0313 23:44:32.587574   22414 main.go:141] libmachine: (ha-504633)     </rng>
	I0313 23:44:32.587584   22414 main.go:141] libmachine: (ha-504633)     
	I0313 23:44:32.587594   22414 main.go:141] libmachine: (ha-504633)     
	I0313 23:44:32.587615   22414 main.go:141] libmachine: (ha-504633)   </devices>
	I0313 23:44:32.587632   22414 main.go:141] libmachine: (ha-504633) </domain>
	I0313 23:44:32.587642   22414 main.go:141] libmachine: (ha-504633) 
	I0313 23:44:32.591667   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:2d:9d:87 in network default
	I0313 23:44:32.592245   22414 main.go:141] libmachine: (ha-504633) Ensuring networks are active...
	I0313 23:44:32.592267   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:32.592995   22414 main.go:141] libmachine: (ha-504633) Ensuring network default is active
	I0313 23:44:32.593264   22414 main.go:141] libmachine: (ha-504633) Ensuring network mk-ha-504633 is active
	I0313 23:44:32.593831   22414 main.go:141] libmachine: (ha-504633) Getting domain xml...
	I0313 23:44:32.594434   22414 main.go:141] libmachine: (ha-504633) Creating domain...
	I0313 23:44:33.778039   22414 main.go:141] libmachine: (ha-504633) Waiting to get IP...
	I0313 23:44:33.778816   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:33.779142   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:33.779172   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:33.779123   22437 retry.go:31] will retry after 306.290275ms: waiting for machine to come up
	I0313 23:44:34.086721   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:34.087139   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:34.087180   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:34.087115   22437 retry.go:31] will retry after 343.376293ms: waiting for machine to come up
	I0313 23:44:34.431840   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:34.432327   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:34.432349   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:34.432275   22437 retry.go:31] will retry after 379.783985ms: waiting for machine to come up
	I0313 23:44:34.813983   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:34.814535   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:34.814575   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:34.814467   22437 retry.go:31] will retry after 541.31159ms: waiting for machine to come up
	I0313 23:44:35.357035   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:35.357545   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:35.357572   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:35.357504   22437 retry.go:31] will retry after 659.350133ms: waiting for machine to come up
	I0313 23:44:36.018159   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:36.018542   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:36.018557   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:36.018509   22437 retry.go:31] will retry after 654.425245ms: waiting for machine to come up
	I0313 23:44:36.674443   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:36.674941   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:36.674974   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:36.674916   22437 retry.go:31] will retry after 956.937793ms: waiting for machine to come up
	I0313 23:44:37.634017   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:37.634591   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:37.634613   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:37.634541   22437 retry.go:31] will retry after 966.617352ms: waiting for machine to come up
	I0313 23:44:38.602723   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:38.603199   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:38.603230   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:38.603140   22437 retry.go:31] will retry after 1.15163624s: waiting for machine to come up
	I0313 23:44:39.756107   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:39.756522   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:39.756558   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:39.756485   22437 retry.go:31] will retry after 2.030299917s: waiting for machine to come up
	I0313 23:44:41.789690   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:41.790051   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:41.790081   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:41.790003   22437 retry.go:31] will retry after 2.380119341s: waiting for machine to come up
	I0313 23:44:44.171371   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:44.171805   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:44.171843   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:44.171725   22437 retry.go:31] will retry after 3.5769802s: waiting for machine to come up
	I0313 23:44:47.749986   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:47.750442   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:47.750464   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:47.750386   22437 retry.go:31] will retry after 4.213108212s: waiting for machine to come up
	I0313 23:44:51.968766   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:51.969192   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find current IP address of domain ha-504633 in network mk-ha-504633
	I0313 23:44:51.969213   22414 main.go:141] libmachine: (ha-504633) DBG | I0313 23:44:51.969152   22437 retry.go:31] will retry after 3.948908595s: waiting for machine to come up
	I0313 23:44:55.919719   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:55.920146   22414 main.go:141] libmachine: (ha-504633) Found IP for machine: 192.168.39.31
	I0313 23:44:55.920185   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has current primary IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:55.920198   22414 main.go:141] libmachine: (ha-504633) Reserving static IP address...
	I0313 23:44:55.920529   22414 main.go:141] libmachine: (ha-504633) DBG | unable to find host DHCP lease matching {name: "ha-504633", mac: "52:54:00:ad:1c:0e", ip: "192.168.39.31"} in network mk-ha-504633
	I0313 23:44:55.992191   22414 main.go:141] libmachine: (ha-504633) DBG | Getting to WaitForSSH function...
	I0313 23:44:55.992224   22414 main.go:141] libmachine: (ha-504633) Reserved static IP address: 192.168.39.31
	I0313 23:44:55.992238   22414 main.go:141] libmachine: (ha-504633) Waiting for SSH to be available...
	I0313 23:44:55.995144   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:55.995518   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:55.995546   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:55.995692   22414 main.go:141] libmachine: (ha-504633) DBG | Using SSH client type: external
	I0313 23:44:55.995719   22414 main.go:141] libmachine: (ha-504633) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa (-rw-------)
	I0313 23:44:55.995759   22414 main.go:141] libmachine: (ha-504633) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0313 23:44:55.995773   22414 main.go:141] libmachine: (ha-504633) DBG | About to run SSH command:
	I0313 23:44:55.995803   22414 main.go:141] libmachine: (ha-504633) DBG | exit 0
	I0313 23:44:56.123167   22414 main.go:141] libmachine: (ha-504633) DBG | SSH cmd err, output: <nil>: 
	I0313 23:44:56.123449   22414 main.go:141] libmachine: (ha-504633) KVM machine creation complete!
	I0313 23:44:56.123731   22414 main.go:141] libmachine: (ha-504633) Calling .GetConfigRaw
	I0313 23:44:56.124220   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:56.124427   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:56.124603   22414 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0313 23:44:56.124618   22414 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:44:56.125979   22414 main.go:141] libmachine: Detecting operating system of created instance...
	I0313 23:44:56.125995   22414 main.go:141] libmachine: Waiting for SSH to be available...
	I0313 23:44:56.126004   22414 main.go:141] libmachine: Getting to WaitForSSH function...
	I0313 23:44:56.126013   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:56.128796   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.129264   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.129302   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.129431   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:56.129603   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.129753   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.129919   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:56.130063   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:44:56.130340   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:44:56.130353   22414 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0313 23:44:56.242439   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:44:56.242468   22414 main.go:141] libmachine: Detecting the provisioner...
	I0313 23:44:56.242478   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:56.246986   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.247423   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.247465   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.247630   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:56.247840   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.248012   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.248172   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:56.248340   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:44:56.248489   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:44:56.248505   22414 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0313 23:44:56.360116   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0313 23:44:56.360192   22414 main.go:141] libmachine: found compatible host: buildroot
	I0313 23:44:56.360203   22414 main.go:141] libmachine: Provisioning with buildroot...
	I0313 23:44:56.360213   22414 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:44:56.360482   22414 buildroot.go:166] provisioning hostname "ha-504633"
	I0313 23:44:56.360504   22414 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:44:56.360700   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:56.363857   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.364223   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.364253   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.364329   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:56.364513   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.364706   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.364861   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:56.365034   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:44:56.365209   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:44:56.365223   22414 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-504633 && echo "ha-504633" | sudo tee /etc/hostname
	I0313 23:44:56.493407   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-504633
	
	I0313 23:44:56.493435   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:56.496404   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.496821   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.496850   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.497015   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:56.497217   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.497344   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.497450   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:56.497609   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:44:56.497770   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:44:56.497790   22414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-504633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-504633/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-504633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0313 23:44:56.620475   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:44:56.620502   22414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0313 23:44:56.620552   22414 buildroot.go:174] setting up certificates
	I0313 23:44:56.620563   22414 provision.go:84] configureAuth start
	I0313 23:44:56.620572   22414 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:44:56.620885   22414 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:44:56.623726   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.624098   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.624119   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.624330   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:56.626384   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.626663   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.626688   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.626833   22414 provision.go:143] copyHostCerts
	I0313 23:44:56.626865   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:44:56.626904   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0313 23:44:56.626915   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:44:56.626980   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0313 23:44:56.627074   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:44:56.627093   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0313 23:44:56.627097   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:44:56.627119   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0313 23:44:56.627170   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:44:56.627188   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0313 23:44:56.627194   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:44:56.627219   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0313 23:44:56.627274   22414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.ha-504633 san=[127.0.0.1 192.168.39.31 ha-504633 localhost minikube]
	I0313 23:44:56.742896   22414 provision.go:177] copyRemoteCerts
	I0313 23:44:56.742947   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0313 23:44:56.742969   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:56.745562   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.745869   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.745899   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.746104   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:56.746279   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.746469   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:56.746588   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:44:56.833348   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0313 23:44:56.833410   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0313 23:44:56.859577   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0313 23:44:56.859643   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0313 23:44:56.884457   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0313 23:44:56.884525   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
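The server certificate generated above (SANs: 127.0.0.1, 192.168.39.31, ha-504633, localhost, minikube) ends up on the guest as /etc/docker/server.pem. A quick way to confirm the SAN list from inside the VM is an openssl query; this is a hypothetical manual check, not something the test run executes:
  # inspect the SANs of the provisioned server certificate (manual verification, not part of the test)
  openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'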
	I0313 23:44:56.908850   22414 provision.go:87] duration metric: took 288.275233ms to configureAuth
	I0313 23:44:56.908877   22414 buildroot.go:189] setting minikube options for container-runtime
	I0313 23:44:56.909026   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:44:56.909099   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:56.911808   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.912157   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:56.912184   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:56.912367   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:56.912551   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.912698   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:56.912850   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:56.913014   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:44:56.913188   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:44:56.913209   22414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0313 23:44:57.196917   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0313 23:44:57.196947   22414 main.go:141] libmachine: Checking connection to Docker...
	I0313 23:44:57.196957   22414 main.go:141] libmachine: (ha-504633) Calling .GetURL
	I0313 23:44:57.198383   22414 main.go:141] libmachine: (ha-504633) DBG | Using libvirt version 6000000
	I0313 23:44:57.200945   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.201281   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:57.201301   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.201550   22414 main.go:141] libmachine: Docker is up and running!
	I0313 23:44:57.201564   22414 main.go:141] libmachine: Reticulating splines...
	I0313 23:44:57.201571   22414 client.go:171] duration metric: took 24.987671205s to LocalClient.Create
	I0313 23:44:57.201593   22414 start.go:167] duration metric: took 24.987729845s to libmachine.API.Create "ha-504633"
	I0313 23:44:57.201601   22414 start.go:293] postStartSetup for "ha-504633" (driver="kvm2")
	I0313 23:44:57.201612   22414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0313 23:44:57.201628   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:57.201841   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0313 23:44:57.201862   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:57.204145   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.204499   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:57.204528   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.204618   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:57.204794   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:57.204949   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:57.205072   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:44:57.293878   22414 ssh_runner.go:195] Run: cat /etc/os-release
	I0313 23:44:57.298485   22414 info.go:137] Remote host: Buildroot 2023.02.9
	I0313 23:44:57.298510   22414 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0313 23:44:57.298589   22414 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0313 23:44:57.298679   22414 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0313 23:44:57.298691   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /etc/ssl/certs/122682.pem
	I0313 23:44:57.298817   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0313 23:44:57.308679   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:44:57.334334   22414 start.go:296] duration metric: took 132.719551ms for postStartSetup
	I0313 23:44:57.334387   22414 main.go:141] libmachine: (ha-504633) Calling .GetConfigRaw
	I0313 23:44:57.335039   22414 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:44:57.337483   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.337819   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:57.337873   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.338027   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:44:57.338216   22414 start.go:128] duration metric: took 25.141782705s to createHost
	I0313 23:44:57.338241   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:57.340536   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.340844   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:57.340881   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.340954   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:57.341172   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:57.341329   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:57.341514   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:57.341733   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:44:57.341876   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:44:57.341889   22414 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0313 23:44:57.455835   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710373497.428421798
	
	I0313 23:44:57.455856   22414 fix.go:216] guest clock: 1710373497.428421798
	I0313 23:44:57.455864   22414 fix.go:229] Guest: 2024-03-13 23:44:57.428421798 +0000 UTC Remote: 2024-03-13 23:44:57.338229619 +0000 UTC m=+25.260713200 (delta=90.192179ms)
	I0313 23:44:57.455904   22414 fix.go:200] guest clock delta is within tolerance: 90.192179ms
	I0313 23:44:57.455912   22414 start.go:83] releasing machines lock for "ha-504633", held for 25.259544059s
	I0313 23:44:57.455929   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:57.456222   22414 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:44:57.458828   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.459263   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:57.459289   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.459431   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:57.459910   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:57.460077   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:44:57.460158   22414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0313 23:44:57.460208   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:57.460290   22414 ssh_runner.go:195] Run: cat /version.json
	I0313 23:44:57.460311   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:44:57.462602   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.462967   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.463007   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:57.463031   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.463152   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:57.463331   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:57.463522   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:57.463550   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:57.463568   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:57.463617   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:44:57.463713   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:44:57.463796   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:44:57.463930   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:44:57.464096   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:44:57.543720   22414 ssh_runner.go:195] Run: systemctl --version
	I0313 23:44:57.580818   22414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0313 23:44:57.742779   22414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0313 23:44:57.749815   22414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0313 23:44:57.749880   22414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0313 23:44:57.766967   22414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0313 23:44:57.766986   22414 start.go:494] detecting cgroup driver to use...
	I0313 23:44:57.767040   22414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0313 23:44:57.783463   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0313 23:44:57.797445   22414 docker.go:217] disabling cri-docker service (if available) ...
	I0313 23:44:57.797510   22414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0313 23:44:57.811066   22414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0313 23:44:57.825269   22414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0313 23:44:57.945932   22414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0313 23:44:58.085895   22414 docker.go:233] disabling docker service ...
	I0313 23:44:58.085969   22414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0313 23:44:58.101314   22414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0313 23:44:58.114944   22414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0313 23:44:58.261766   22414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0313 23:44:58.377397   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0313 23:44:58.393010   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0313 23:44:58.413240   22414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0313 23:44:58.413296   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:44:58.424471   22414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0313 23:44:58.424525   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:44:58.435516   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:44:58.446008   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:44:58.456482   22414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0313 23:44:58.467471   22414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0313 23:44:58.477162   22414 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0313 23:44:58.477208   22414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0313 23:44:58.490571   22414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0313 23:44:58.500186   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:44:58.616180   22414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0313 23:44:58.757118   22414 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0313 23:44:58.757183   22414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0313 23:44:58.761792   22414 start.go:562] Will wait 60s for crictl version
	I0313 23:44:58.761845   22414 ssh_runner.go:195] Run: which crictl
	I0313 23:44:58.765720   22414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0313 23:44:58.804603   22414 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0313 23:44:58.804690   22414 ssh_runner.go:195] Run: crio --version
	I0313 23:44:58.832457   22414 ssh_runner.go:195] Run: crio --version
	I0313 23:44:58.864814   22414 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0313 23:44:58.866087   22414 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:44:58.868642   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:58.868918   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:44:58.868947   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:44:58.869232   22414 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0313 23:44:58.873403   22414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:44:58.886969   22414 kubeadm.go:877] updating cluster {Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0313 23:44:58.887067   22414 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:44:58.887122   22414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:44:58.922426   22414 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0313 23:44:58.922517   22414 ssh_runner.go:195] Run: which lz4
	I0313 23:44:58.927152   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0313 23:44:58.927265   22414 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0313 23:44:58.931694   22414 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0313 23:44:58.931737   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0313 23:45:00.592949   22414 crio.go:444] duration metric: took 1.665723837s to copy over tarball
	I0313 23:45:00.593015   22414 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0313 23:45:02.970524   22414 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.377473368s)
	I0313 23:45:02.970552   22414 crio.go:451] duration metric: took 2.377583062s to extract the tarball
	I0313 23:45:02.970559   22414 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0313 23:45:03.017247   22414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:45:03.066718   22414 crio.go:496] all images are preloaded for cri-o runtime.
	I0313 23:45:03.066737   22414 cache_images.go:84] Images are preloaded, skipping loading
	I0313 23:45:03.066745   22414 kubeadm.go:928] updating node { 192.168.39.31 8443 v1.28.4 crio true true} ...
	I0313 23:45:03.066863   22414 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-504633 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
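The kubelet unit drop-in shown above is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside the base unit (see the scp lines further down in the log). On the guest the merged result can be viewed with systemd; a manual check, not part of the run:
  # show the kubelet unit plus the 10-kubeadm.conf drop-in written by minikube
  systemctl cat kubelet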
	I0313 23:45:03.066925   22414 ssh_runner.go:195] Run: crio config
	I0313 23:45:03.114121   22414 cni.go:84] Creating CNI manager for ""
	I0313 23:45:03.114142   22414 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0313 23:45:03.114151   22414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0313 23:45:03.114175   22414 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.31 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-504633 NodeName:ha-504633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0313 23:45:03.114321   22414 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-504633"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
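The rendered kubeadm config above is written out as /var/tmp/minikube/kubeadm.yaml.new, copied to /var/tmp/minikube/kubeadm.yaml, and fed to kubeadm init later in this log. If needed, it can be sanity-checked without touching node state; a hypothetical dry run, not executed by the test:
  # validate the generated config without modifying the node (manual step)
  sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run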
	
	I0313 23:45:03.114344   22414 kube-vip.go:105] generating kube-vip config ...
	I0313 23:45:03.114408   22414 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
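This static pod manifest is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp line below) and advertises the control-plane VIP 192.168.39.254 on eth0. Once kube-vip wins leader election, the address should be visible on the interface; a manual check, not part of the test:
  # the VIP from the manifest above should appear on eth0 once kube-vip holds the lease
  ip addr show eth0 | grep 192.168.39.254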
	I0313 23:45:03.114463   22414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0313 23:45:03.125636   22414 binaries.go:44] Found k8s binaries, skipping transfer
	I0313 23:45:03.125707   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0313 23:45:03.136412   22414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0313 23:45:03.154601   22414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0313 23:45:03.173826   22414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0313 23:45:03.193412   22414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0313 23:45:03.212431   22414 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0313 23:45:03.216456   22414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:45:03.231229   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:45:03.372140   22414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:45:03.389338   22414 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633 for IP: 192.168.39.31
	I0313 23:45:03.389366   22414 certs.go:194] generating shared ca certs ...
	I0313 23:45:03.389389   22414 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:03.389555   22414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0313 23:45:03.389599   22414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0313 23:45:03.389608   22414 certs.go:256] generating profile certs ...
	I0313 23:45:03.389654   22414 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key
	I0313 23:45:03.389667   22414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.crt with IP's: []
	I0313 23:45:03.523525   22414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.crt ...
	I0313 23:45:03.523552   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.crt: {Name:mk22bec89923e7024371764bd175dc7af6d5fdb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:03.523743   22414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key ...
	I0313 23:45:03.523756   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key: {Name:mk73767ffed852771d73580f3602a0d681fcd72d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:03.523853   22414 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.505a42ea
	I0313 23:45:03.523869   22414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.505a42ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.31 192.168.39.254]
	I0313 23:45:03.692236   22414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.505a42ea ...
	I0313 23:45:03.692267   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.505a42ea: {Name:mk0792f22ba1e3bfeb549ffac82f09e7bc61c64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:03.692449   22414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.505a42ea ...
	I0313 23:45:03.692465   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.505a42ea: {Name:mka46a2ab563858a9ee7a9ac8ce0c41365de723d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:03.692566   22414 certs.go:381] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.505a42ea -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt
	I0313 23:45:03.692664   22414 certs.go:385] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.505a42ea -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key
	I0313 23:45:03.692718   22414 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key
	I0313 23:45:03.692733   22414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt with IP's: []
	I0313 23:45:03.821644   22414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt ...
	I0313 23:45:03.821673   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt: {Name:mk07f7b2b9ef33712403e38fb81f6fcd2fb94470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:03.821856   22414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key ...
	I0313 23:45:03.821871   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key: {Name:mkdbea9fe12c0064266a4011897ec2b342b77dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:03.821961   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0313 23:45:03.821980   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0313 23:45:03.821998   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0313 23:45:03.822012   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0313 23:45:03.822022   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0313 23:45:03.822034   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0313 23:45:03.822044   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0313 23:45:03.822054   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0313 23:45:03.822099   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0313 23:45:03.822132   22414 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0313 23:45:03.822141   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0313 23:45:03.822163   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0313 23:45:03.822184   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0313 23:45:03.822207   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0313 23:45:03.822240   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:45:03.822268   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:03.822280   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem -> /usr/share/ca-certificates/12268.pem
	I0313 23:45:03.822292   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /usr/share/ca-certificates/122682.pem
	I0313 23:45:03.822894   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0313 23:45:03.854236   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0313 23:45:03.881213   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0313 23:45:03.908312   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0313 23:45:03.936099   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0313 23:45:03.963287   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0313 23:45:03.989799   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0313 23:45:04.017256   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0313 23:45:04.043978   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0313 23:45:04.071641   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0313 23:45:04.098975   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0313 23:45:04.126675   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0313 23:45:04.144841   22414 ssh_runner.go:195] Run: openssl version
	I0313 23:45:04.151112   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0313 23:45:04.166310   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:04.178399   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:04.178462   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:04.185037   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0313 23:45:04.212789   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0313 23:45:04.225872   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0313 23:45:04.230723   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0313 23:45:04.230799   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0313 23:45:04.236814   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0313 23:45:04.250520   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0313 23:45:04.261446   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0313 23:45:04.266063   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0313 23:45:04.266103   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0313 23:45:04.271954   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0313 23:45:04.283442   22414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0313 23:45:04.287981   22414 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0313 23:45:04.288041   22414 kubeadm.go:391] StartCluster: {Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:45:04.288116   22414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0313 23:45:04.288165   22414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0313 23:45:04.327317   22414 cri.go:89] found id: ""
	I0313 23:45:04.327400   22414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0313 23:45:04.338063   22414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0313 23:45:04.348309   22414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0313 23:45:04.358143   22414 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0313 23:45:04.358164   22414 kubeadm.go:156] found existing configuration files:
	
	I0313 23:45:04.358206   22414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0313 23:45:04.367644   22414 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0313 23:45:04.367739   22414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0313 23:45:04.377487   22414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0313 23:45:04.386536   22414 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0313 23:45:04.386584   22414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0313 23:45:04.396399   22414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0313 23:45:04.406760   22414 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0313 23:45:04.406837   22414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0313 23:45:04.417253   22414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0313 23:45:04.426848   22414 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0313 23:45:04.426893   22414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0313 23:45:04.436723   22414 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0313 23:45:04.687640   22414 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0313 23:45:16.757895   22414 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0313 23:45:16.757969   22414 kubeadm.go:309] [preflight] Running pre-flight checks
	I0313 23:45:16.758047   22414 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0313 23:45:16.758127   22414 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0313 23:45:16.758210   22414 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0313 23:45:16.758307   22414 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0313 23:45:16.759942   22414 out.go:204]   - Generating certificates and keys ...
	I0313 23:45:16.760040   22414 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0313 23:45:16.760114   22414 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0313 23:45:16.760214   22414 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0313 23:45:16.760320   22414 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0313 23:45:16.760409   22414 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0313 23:45:16.760482   22414 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0313 23:45:16.760569   22414 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0313 23:45:16.760749   22414 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-504633 localhost] and IPs [192.168.39.31 127.0.0.1 ::1]
	I0313 23:45:16.760824   22414 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0313 23:45:16.760941   22414 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-504633 localhost] and IPs [192.168.39.31 127.0.0.1 ::1]
	I0313 23:45:16.761037   22414 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0313 23:45:16.761115   22414 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0313 23:45:16.761187   22414 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0313 23:45:16.761270   22414 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0313 23:45:16.761354   22414 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0313 23:45:16.761428   22414 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0313 23:45:16.761557   22414 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0313 23:45:16.761648   22414 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0313 23:45:16.761748   22414 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0313 23:45:16.761838   22414 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0313 23:45:16.763193   22414 out.go:204]   - Booting up control plane ...
	I0313 23:45:16.763286   22414 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0313 23:45:16.763347   22414 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0313 23:45:16.763399   22414 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0313 23:45:16.763480   22414 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0313 23:45:16.763558   22414 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0313 23:45:16.763590   22414 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0313 23:45:16.763710   22414 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0313 23:45:16.763768   22414 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.108666 seconds
	I0313 23:45:16.763860   22414 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0313 23:45:16.763959   22414 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0313 23:45:16.764015   22414 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0313 23:45:16.764153   22414 kubeadm.go:309] [mark-control-plane] Marking the node ha-504633 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0313 23:45:16.764199   22414 kubeadm.go:309] [bootstrap-token] Using token: setsml.ffo6177g1a5h04fn
	I0313 23:45:16.765598   22414 out.go:204]   - Configuring RBAC rules ...
	I0313 23:45:16.765698   22414 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0313 23:45:16.765764   22414 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0313 23:45:16.765872   22414 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0313 23:45:16.765975   22414 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0313 23:45:16.766061   22414 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0313 23:45:16.766133   22414 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0313 23:45:16.766226   22414 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0313 23:45:16.766261   22414 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0313 23:45:16.766301   22414 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0313 23:45:16.766306   22414 kubeadm.go:309] 
	I0313 23:45:16.766372   22414 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0313 23:45:16.766387   22414 kubeadm.go:309] 
	I0313 23:45:16.766461   22414 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0313 23:45:16.766470   22414 kubeadm.go:309] 
	I0313 23:45:16.766510   22414 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0313 23:45:16.766578   22414 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0313 23:45:16.766648   22414 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0313 23:45:16.766658   22414 kubeadm.go:309] 
	I0313 23:45:16.766708   22414 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0313 23:45:16.766714   22414 kubeadm.go:309] 
	I0313 23:45:16.766768   22414 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0313 23:45:16.766774   22414 kubeadm.go:309] 
	I0313 23:45:16.766835   22414 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0313 23:45:16.766939   22414 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0313 23:45:16.767041   22414 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0313 23:45:16.767058   22414 kubeadm.go:309] 
	I0313 23:45:16.767170   22414 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0313 23:45:16.767289   22414 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0313 23:45:16.767302   22414 kubeadm.go:309] 
	I0313 23:45:16.767375   22414 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token setsml.ffo6177g1a5h04fn \
	I0313 23:45:16.767468   22414 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c \
	I0313 23:45:16.767487   22414 kubeadm.go:309] 	--control-plane 
	I0313 23:45:16.767494   22414 kubeadm.go:309] 
	I0313 23:45:16.767575   22414 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0313 23:45:16.767582   22414 kubeadm.go:309] 
	I0313 23:45:16.767648   22414 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token setsml.ffo6177g1a5h04fn \
	I0313 23:45:16.767758   22414 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c 
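For reference, a minimal verification sketch using the admin kubeconfig path printed by kubeadm above (standard kubectl commands, not taken from this log):

    # Once kubeadm init reports success, the admin kubeconfig can confirm the
    # control-plane node and the static/system pods it brought up.
    export KUBECONFIG=/etc/kubernetes/admin.conf
    kubectl get nodes -o wide
    kubectl -n kube-system get pods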
	I0313 23:45:16.767777   22414 cni.go:84] Creating CNI manager for ""
	I0313 23:45:16.767785   22414 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0313 23:45:16.769361   22414 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0313 23:45:16.770584   22414 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0313 23:45:16.800031   22414 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0313 23:45:16.800058   22414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0313 23:45:16.858285   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0313 23:45:18.072367   22414 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.214046702s)
	I0313 23:45:18.072402   22414 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0313 23:45:18.072513   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:18.072523   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-504633 minikube.k8s.io/updated_at=2024_03_13T23_45_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe minikube.k8s.io/name=ha-504633 minikube.k8s.io/primary=true
	I0313 23:45:18.094143   22414 ops.go:34] apiserver oom_adj: -16
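The "-16" logged above is the kernel OOM score adjustment of the API server; a standalone form of the same check (command as run in the log):

    # Negative oom_adj values make the kernel less likely to kill the process
    # under memory pressure; the log records -16 for kube-apiserver here.
    cat /proc/$(pgrep kube-apiserver)/oom_adj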
	I0313 23:45:18.211662   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:18.712676   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:19.212460   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:19.712365   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:20.212342   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:20.712345   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:21.211885   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:21.712221   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:22.212036   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:22.712503   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:23.212718   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:23.712380   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:24.212002   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:24.712095   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:25.212706   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:25.711819   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:26.211934   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:26.712616   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:27.212316   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:27.712353   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:28.212015   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:45:28.301575   22414 kubeadm.go:1106] duration metric: took 10.229136912s to wait for elevateKubeSystemPrivileges
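The repeated "get sa default" calls above are the wait that elevateKubeSystemPrivileges performs: they poll roughly every 500ms until the default ServiceAccount exists. A rough standalone equivalent (binary and kubeconfig paths as in the log):

    # Poll until kube-controller-manager has created the "default"
    # ServiceAccount in the default namespace.
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done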
	W0313 23:45:28.301614   22414 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0313 23:45:28.301620   22414 kubeadm.go:393] duration metric: took 24.013585791s to StartCluster
	I0313 23:45:28.301644   22414 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:28.301730   22414 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:45:28.302366   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:28.302599   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0313 23:45:28.302617   22414 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0313 23:45:28.302659   22414 addons.go:69] Setting storage-provisioner=true in profile "ha-504633"
	I0313 23:45:28.302689   22414 addons.go:234] Setting addon storage-provisioner=true in "ha-504633"
	I0313 23:45:28.302717   22414 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:45:28.302601   22414 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:45:28.302737   22414 addons.go:69] Setting default-storageclass=true in profile "ha-504633"
	I0313 23:45:28.302783   22414 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-504633"
	I0313 23:45:28.302738   22414 start.go:240] waiting for startup goroutines ...
	I0313 23:45:28.302868   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:45:28.303137   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:28.303167   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:28.303186   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:28.303225   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:28.318189   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0313 23:45:28.318523   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0313 23:45:28.318746   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:28.318872   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:28.319335   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:28.319351   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:28.319483   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:28.319515   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:28.319711   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:28.319876   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:28.320111   22414 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:45:28.320302   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:28.320346   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:28.322500   22414 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:45:28.322890   22414 kapi.go:59] client config for ha-504633: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.crt", KeyFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key", CAFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0313 23:45:28.323456   22414 cert_rotation.go:137] Starting client certificate rotation controller
	I0313 23:45:28.323664   22414 addons.go:234] Setting addon default-storageclass=true in "ha-504633"
	I0313 23:45:28.323708   22414 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:45:28.324095   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:28.324141   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:28.336407   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37349
	I0313 23:45:28.336828   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:28.337371   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:28.337399   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:28.337759   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:28.338060   22414 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:45:28.339181   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0313 23:45:28.339571   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:28.339905   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:45:28.340031   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:28.340057   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:28.342220   22414 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0313 23:45:28.340411   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:28.343730   22414 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0313 23:45:28.343755   22414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0313 23:45:28.343774   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:45:28.344781   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:28.344820   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:28.346887   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:45:28.347368   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:45:28.347405   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:45:28.347592   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:45:28.347791   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:45:28.347935   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:45:28.348057   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:45:28.360391   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38893
	I0313 23:45:28.360829   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:28.361252   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:28.361278   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:28.361645   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:28.361820   22414 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:45:28.363624   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:45:28.363850   22414 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0313 23:45:28.363869   22414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0313 23:45:28.363886   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:45:28.366432   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:45:28.366836   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:45:28.366860   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:45:28.367105   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:45:28.367310   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:45:28.367472   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:45:28.367632   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:45:28.442095   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0313 23:45:28.496536   22414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0313 23:45:28.562265   22414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0313 23:45:28.999817   22414 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
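The sed pipeline run at 23:45:28.442095 above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host IP (192.168.39.1). A quick way to confirm the injected record (a verification sketch, not from this log):

    # Show the hosts block the pipeline inserted into the CoreDNS ConfigMap.
    kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 "hosts {"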
	I0313 23:45:29.337048   22414 main.go:141] libmachine: Making call to close driver server
	I0313 23:45:29.337080   22414 main.go:141] libmachine: (ha-504633) Calling .Close
	I0313 23:45:29.337057   22414 main.go:141] libmachine: Making call to close driver server
	I0313 23:45:29.337136   22414 main.go:141] libmachine: (ha-504633) Calling .Close
	I0313 23:45:29.337366   22414 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:45:29.337378   22414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:45:29.337386   22414 main.go:141] libmachine: Making call to close driver server
	I0313 23:45:29.337393   22414 main.go:141] libmachine: (ha-504633) Calling .Close
	I0313 23:45:29.337406   22414 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:45:29.337440   22414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:45:29.337462   22414 main.go:141] libmachine: Making call to close driver server
	I0313 23:45:29.337470   22414 main.go:141] libmachine: (ha-504633) Calling .Close
	I0313 23:45:29.337588   22414 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:45:29.337601   22414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:45:29.337711   22414 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:45:29.337732   22414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:45:29.337748   22414 main.go:141] libmachine: (ha-504633) DBG | Closing plugin on server side
	I0313 23:45:29.337852   22414 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0313 23:45:29.337865   22414 round_trippers.go:469] Request Headers:
	I0313 23:45:29.337876   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:45:29.337887   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:45:29.348574   22414 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0313 23:45:29.349129   22414 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0313 23:45:29.349144   22414 round_trippers.go:469] Request Headers:
	I0313 23:45:29.349155   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:45:29.349161   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:45:29.349167   22414 round_trippers.go:473]     Content-Type: application/json
	I0313 23:45:29.351890   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:45:29.352094   22414 main.go:141] libmachine: Making call to close driver server
	I0313 23:45:29.352108   22414 main.go:141] libmachine: (ha-504633) Calling .Close
	I0313 23:45:29.352378   22414 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:45:29.352397   22414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:45:29.354171   22414 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0313 23:45:29.355441   22414 addons.go:505] duration metric: took 1.052819857s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0313 23:45:29.355475   22414 start.go:245] waiting for cluster config update ...
	I0313 23:45:29.355487   22414 start.go:254] writing updated cluster config ...
	I0313 23:45:29.356919   22414 out.go:177] 
	I0313 23:45:29.358206   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:45:29.358266   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:45:29.359776   22414 out.go:177] * Starting "ha-504633-m02" control-plane node in "ha-504633" cluster
	I0313 23:45:29.360982   22414 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:45:29.361010   22414 cache.go:56] Caching tarball of preloaded images
	I0313 23:45:29.361103   22414 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0313 23:45:29.361119   22414 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0313 23:45:29.361214   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:45:29.361431   22414 start.go:360] acquireMachinesLock for ha-504633-m02: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0313 23:45:29.361489   22414 start.go:364] duration metric: took 33.897µs to acquireMachinesLock for "ha-504633-m02"
	I0313 23:45:29.361510   22414 start.go:93] Provisioning new machine with config: &{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:45:29.361603   22414 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0313 23:45:29.364235   22414 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0313 23:45:29.364321   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:29.364353   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:29.378585   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34897
	I0313 23:45:29.379096   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:29.379628   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:29.379656   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:29.379951   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:29.380134   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetMachineName
	I0313 23:45:29.380265   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:29.380459   22414 start.go:159] libmachine.API.Create for "ha-504633" (driver="kvm2")
	I0313 23:45:29.380501   22414 client.go:168] LocalClient.Create starting
	I0313 23:45:29.380566   22414 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem
	I0313 23:45:29.380611   22414 main.go:141] libmachine: Decoding PEM data...
	I0313 23:45:29.380631   22414 main.go:141] libmachine: Parsing certificate...
	I0313 23:45:29.380677   22414 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem
	I0313 23:45:29.380700   22414 main.go:141] libmachine: Decoding PEM data...
	I0313 23:45:29.380710   22414 main.go:141] libmachine: Parsing certificate...
	I0313 23:45:29.380726   22414 main.go:141] libmachine: Running pre-create checks...
	I0313 23:45:29.380735   22414 main.go:141] libmachine: (ha-504633-m02) Calling .PreCreateCheck
	I0313 23:45:29.380897   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetConfigRaw
	I0313 23:45:29.381338   22414 main.go:141] libmachine: Creating machine...
	I0313 23:45:29.381353   22414 main.go:141] libmachine: (ha-504633-m02) Calling .Create
	I0313 23:45:29.381489   22414 main.go:141] libmachine: (ha-504633-m02) Creating KVM machine...
	I0313 23:45:29.382723   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found existing default KVM network
	I0313 23:45:29.382860   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found existing private KVM network mk-ha-504633
	I0313 23:45:29.383024   22414 main.go:141] libmachine: (ha-504633-m02) Setting up store path in /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02 ...
	I0313 23:45:29.383049   22414 main.go:141] libmachine: (ha-504633-m02) Building disk image from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0313 23:45:29.383124   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:29.383015   22745 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:45:29.383220   22414 main.go:141] libmachine: (ha-504633-m02) Downloading /home/jenkins/minikube-integration/18375-4912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0313 23:45:29.603731   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:29.603540   22745 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa...
	I0313 23:45:29.716976   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:29.716867   22745 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/ha-504633-m02.rawdisk...
	I0313 23:45:29.717033   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Writing magic tar header
	I0313 23:45:29.717049   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Writing SSH key tar header
	I0313 23:45:29.717061   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:29.717001   22745 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02 ...
	I0313 23:45:29.717163   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02
	I0313 23:45:29.717204   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines
	I0313 23:45:29.717222   22414 main.go:141] libmachine: (ha-504633-m02) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02 (perms=drwx------)
	I0313 23:45:29.717237   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:45:29.717264   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912
	I0313 23:45:29.717282   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0313 23:45:29.717296   22414 main.go:141] libmachine: (ha-504633-m02) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines (perms=drwxr-xr-x)
	I0313 23:45:29.717308   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Checking permissions on dir: /home/jenkins
	I0313 23:45:29.717323   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Checking permissions on dir: /home
	I0313 23:45:29.717334   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Skipping /home - not owner
	I0313 23:45:29.717352   22414 main.go:141] libmachine: (ha-504633-m02) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube (perms=drwxr-xr-x)
	I0313 23:45:29.717369   22414 main.go:141] libmachine: (ha-504633-m02) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912 (perms=drwxrwxr-x)
	I0313 23:45:29.717380   22414 main.go:141] libmachine: (ha-504633-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0313 23:45:29.717389   22414 main.go:141] libmachine: (ha-504633-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0313 23:45:29.717402   22414 main.go:141] libmachine: (ha-504633-m02) Creating domain...
	I0313 23:45:29.718283   22414 main.go:141] libmachine: (ha-504633-m02) define libvirt domain using xml: 
	I0313 23:45:29.718304   22414 main.go:141] libmachine: (ha-504633-m02) <domain type='kvm'>
	I0313 23:45:29.718315   22414 main.go:141] libmachine: (ha-504633-m02)   <name>ha-504633-m02</name>
	I0313 23:45:29.718323   22414 main.go:141] libmachine: (ha-504633-m02)   <memory unit='MiB'>2200</memory>
	I0313 23:45:29.718337   22414 main.go:141] libmachine: (ha-504633-m02)   <vcpu>2</vcpu>
	I0313 23:45:29.718348   22414 main.go:141] libmachine: (ha-504633-m02)   <features>
	I0313 23:45:29.718360   22414 main.go:141] libmachine: (ha-504633-m02)     <acpi/>
	I0313 23:45:29.718370   22414 main.go:141] libmachine: (ha-504633-m02)     <apic/>
	I0313 23:45:29.718398   22414 main.go:141] libmachine: (ha-504633-m02)     <pae/>
	I0313 23:45:29.718419   22414 main.go:141] libmachine: (ha-504633-m02)     
	I0313 23:45:29.718433   22414 main.go:141] libmachine: (ha-504633-m02)   </features>
	I0313 23:45:29.718445   22414 main.go:141] libmachine: (ha-504633-m02)   <cpu mode='host-passthrough'>
	I0313 23:45:29.718457   22414 main.go:141] libmachine: (ha-504633-m02)   
	I0313 23:45:29.718467   22414 main.go:141] libmachine: (ha-504633-m02)   </cpu>
	I0313 23:45:29.718481   22414 main.go:141] libmachine: (ha-504633-m02)   <os>
	I0313 23:45:29.718497   22414 main.go:141] libmachine: (ha-504633-m02)     <type>hvm</type>
	I0313 23:45:29.718510   22414 main.go:141] libmachine: (ha-504633-m02)     <boot dev='cdrom'/>
	I0313 23:45:29.718521   22414 main.go:141] libmachine: (ha-504633-m02)     <boot dev='hd'/>
	I0313 23:45:29.718534   22414 main.go:141] libmachine: (ha-504633-m02)     <bootmenu enable='no'/>
	I0313 23:45:29.718546   22414 main.go:141] libmachine: (ha-504633-m02)   </os>
	I0313 23:45:29.718557   22414 main.go:141] libmachine: (ha-504633-m02)   <devices>
	I0313 23:45:29.718567   22414 main.go:141] libmachine: (ha-504633-m02)     <disk type='file' device='cdrom'>
	I0313 23:45:29.718600   22414 main.go:141] libmachine: (ha-504633-m02)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/boot2docker.iso'/>
	I0313 23:45:29.718625   22414 main.go:141] libmachine: (ha-504633-m02)       <target dev='hdc' bus='scsi'/>
	I0313 23:45:29.718638   22414 main.go:141] libmachine: (ha-504633-m02)       <readonly/>
	I0313 23:45:29.718647   22414 main.go:141] libmachine: (ha-504633-m02)     </disk>
	I0313 23:45:29.718659   22414 main.go:141] libmachine: (ha-504633-m02)     <disk type='file' device='disk'>
	I0313 23:45:29.718670   22414 main.go:141] libmachine: (ha-504633-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0313 23:45:29.718686   22414 main.go:141] libmachine: (ha-504633-m02)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/ha-504633-m02.rawdisk'/>
	I0313 23:45:29.718699   22414 main.go:141] libmachine: (ha-504633-m02)       <target dev='hda' bus='virtio'/>
	I0313 23:45:29.718710   22414 main.go:141] libmachine: (ha-504633-m02)     </disk>
	I0313 23:45:29.718721   22414 main.go:141] libmachine: (ha-504633-m02)     <interface type='network'>
	I0313 23:45:29.718734   22414 main.go:141] libmachine: (ha-504633-m02)       <source network='mk-ha-504633'/>
	I0313 23:45:29.718744   22414 main.go:141] libmachine: (ha-504633-m02)       <model type='virtio'/>
	I0313 23:45:29.718752   22414 main.go:141] libmachine: (ha-504633-m02)     </interface>
	I0313 23:45:29.718781   22414 main.go:141] libmachine: (ha-504633-m02)     <interface type='network'>
	I0313 23:45:29.718793   22414 main.go:141] libmachine: (ha-504633-m02)       <source network='default'/>
	I0313 23:45:29.718819   22414 main.go:141] libmachine: (ha-504633-m02)       <model type='virtio'/>
	I0313 23:45:29.718830   22414 main.go:141] libmachine: (ha-504633-m02)     </interface>
	I0313 23:45:29.718837   22414 main.go:141] libmachine: (ha-504633-m02)     <serial type='pty'>
	I0313 23:45:29.718847   22414 main.go:141] libmachine: (ha-504633-m02)       <target port='0'/>
	I0313 23:45:29.718858   22414 main.go:141] libmachine: (ha-504633-m02)     </serial>
	I0313 23:45:29.718864   22414 main.go:141] libmachine: (ha-504633-m02)     <console type='pty'>
	I0313 23:45:29.718876   22414 main.go:141] libmachine: (ha-504633-m02)       <target type='serial' port='0'/>
	I0313 23:45:29.718886   22414 main.go:141] libmachine: (ha-504633-m02)     </console>
	I0313 23:45:29.718897   22414 main.go:141] libmachine: (ha-504633-m02)     <rng model='virtio'>
	I0313 23:45:29.718910   22414 main.go:141] libmachine: (ha-504633-m02)       <backend model='random'>/dev/random</backend>
	I0313 23:45:29.718920   22414 main.go:141] libmachine: (ha-504633-m02)     </rng>
	I0313 23:45:29.718928   22414 main.go:141] libmachine: (ha-504633-m02)     
	I0313 23:45:29.718936   22414 main.go:141] libmachine: (ha-504633-m02)     
	I0313 23:45:29.718943   22414 main.go:141] libmachine: (ha-504633-m02)   </devices>
	I0313 23:45:29.718958   22414 main.go:141] libmachine: (ha-504633-m02) </domain>
	I0313 23:45:29.718972   22414 main.go:141] libmachine: (ha-504633-m02) 
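The XML above is what the kvm2 driver hands to libvirt for the m02 node. A rough manual equivalent with virsh (a sketch; the file name is illustrative, and minikube drives libvirt through its machine driver rather than the CLI):

    # Define and start the domain from the XML shown above, then watch the
    # cluster network for its DHCP lease, which is what "Waiting to get IP"
    # below keeps polling for.
    virsh define ha-504633-m02.xml
    virsh start ha-504633-m02
    virsh net-dhcp-leases mk-ha-504633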
	I0313 23:45:29.725679   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:1b:93:1d in network default
	I0313 23:45:29.726416   22414 main.go:141] libmachine: (ha-504633-m02) Ensuring networks are active...
	I0313 23:45:29.726445   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:29.727361   22414 main.go:141] libmachine: (ha-504633-m02) Ensuring network default is active
	I0313 23:45:29.727707   22414 main.go:141] libmachine: (ha-504633-m02) Ensuring network mk-ha-504633 is active
	I0313 23:45:29.728118   22414 main.go:141] libmachine: (ha-504633-m02) Getting domain xml...
	I0313 23:45:29.728982   22414 main.go:141] libmachine: (ha-504633-m02) Creating domain...
	I0313 23:45:30.923815   22414 main.go:141] libmachine: (ha-504633-m02) Waiting to get IP...
	I0313 23:45:30.924814   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:30.925187   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:30.925242   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:30.925181   22745 retry.go:31] will retry after 238.667554ms: waiting for machine to come up
	I0313 23:45:31.165691   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:31.166088   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:31.166122   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:31.166033   22745 retry.go:31] will retry after 269.695339ms: waiting for machine to come up
	I0313 23:45:31.437724   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:31.438322   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:31.438349   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:31.438262   22745 retry.go:31] will retry after 332.684451ms: waiting for machine to come up
	I0313 23:45:31.772916   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:31.773484   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:31.773528   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:31.773464   22745 retry.go:31] will retry after 528.114207ms: waiting for machine to come up
	I0313 23:45:32.303074   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:32.303578   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:32.303606   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:32.303529   22745 retry.go:31] will retry after 535.466395ms: waiting for machine to come up
	I0313 23:45:32.840325   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:32.840800   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:32.840825   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:32.840766   22745 retry.go:31] will retry after 815.467153ms: waiting for machine to come up
	I0313 23:45:33.657736   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:33.658193   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:33.658222   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:33.658155   22745 retry.go:31] will retry after 1.127123157s: waiting for machine to come up
	I0313 23:45:34.786490   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:34.786971   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:34.786997   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:34.786924   22745 retry.go:31] will retry after 1.006211279s: waiting for machine to come up
	I0313 23:45:35.794544   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:35.795021   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:35.795048   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:35.794982   22745 retry.go:31] will retry after 1.316637901s: waiting for machine to come up
	I0313 23:45:37.112803   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:37.113413   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:37.113436   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:37.113364   22745 retry.go:31] will retry after 1.641628067s: waiting for machine to come up
	I0313 23:45:38.758555   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:38.759025   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:38.759054   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:38.758971   22745 retry.go:31] will retry after 2.686943951s: waiting for machine to come up
	I0313 23:45:41.447850   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:41.448244   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:41.448267   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:41.448220   22745 retry.go:31] will retry after 3.433942106s: waiting for machine to come up
	I0313 23:45:44.883689   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:44.884110   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:44.884182   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:44.884065   22745 retry.go:31] will retry after 2.774438793s: waiting for machine to come up
	I0313 23:45:47.661899   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:47.662308   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find current IP address of domain ha-504633-m02 in network mk-ha-504633
	I0313 23:45:47.662325   22414 main.go:141] libmachine: (ha-504633-m02) DBG | I0313 23:45:47.662284   22745 retry.go:31] will retry after 4.804089976s: waiting for machine to come up
	I0313 23:45:52.469740   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:52.470264   22414 main.go:141] libmachine: (ha-504633-m02) Found IP for machine: 192.168.39.47
	I0313 23:45:52.470291   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has current primary IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:52.470300   22414 main.go:141] libmachine: (ha-504633-m02) Reserving static IP address...
	I0313 23:45:52.470665   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find host DHCP lease matching {name: "ha-504633-m02", mac: "52:54:00:56:27:e8", ip: "192.168.39.47"} in network mk-ha-504633
	I0313 23:45:52.542352   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Getting to WaitForSSH function...
	I0313 23:45:52.542383   22414 main.go:141] libmachine: (ha-504633-m02) Reserved static IP address: 192.168.39.47
	I0313 23:45:52.542397   22414 main.go:141] libmachine: (ha-504633-m02) Waiting for SSH to be available...
	I0313 23:45:52.544842   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:52.545119   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633
	I0313 23:45:52.545145   22414 main.go:141] libmachine: (ha-504633-m02) DBG | unable to find defined IP address of network mk-ha-504633 interface with MAC address 52:54:00:56:27:e8
	I0313 23:45:52.545264   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Using SSH client type: external
	I0313 23:45:52.545292   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa (-rw-------)
	I0313 23:45:52.545351   22414 main.go:141] libmachine: (ha-504633-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0313 23:45:52.545369   22414 main.go:141] libmachine: (ha-504633-m02) DBG | About to run SSH command:
	I0313 23:45:52.545384   22414 main.go:141] libmachine: (ha-504633-m02) DBG | exit 0
	I0313 23:45:52.548978   22414 main.go:141] libmachine: (ha-504633-m02) DBG | SSH cmd err, output: exit status 255: 
	I0313 23:45:52.548999   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0313 23:45:52.549010   22414 main.go:141] libmachine: (ha-504633-m02) DBG | command : exit 0
	I0313 23:45:52.549018   22414 main.go:141] libmachine: (ha-504633-m02) DBG | err     : exit status 255
	I0313 23:45:52.549028   22414 main.go:141] libmachine: (ha-504633-m02) DBG | output  : 
	I0313 23:45:55.550013   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Getting to WaitForSSH function...
	I0313 23:45:55.552387   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.552726   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:55.552754   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.552858   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Using SSH client type: external
	I0313 23:45:55.552886   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa (-rw-------)
	I0313 23:45:55.552915   22414 main.go:141] libmachine: (ha-504633-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0313 23:45:55.552931   22414 main.go:141] libmachine: (ha-504633-m02) DBG | About to run SSH command:
	I0313 23:45:55.552943   22414 main.go:141] libmachine: (ha-504633-m02) DBG | exit 0
	I0313 23:45:55.675034   22414 main.go:141] libmachine: (ha-504633-m02) DBG | SSH cmd err, output: <nil>: 
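The exit-status-255 attempt at 23:45:52 and the successful one here are the same probe: WaitForSSH runs "exit 0" over SSH until sshd in the guest answers. A trimmed-down form of the command logged above (host and key path as in the log):

    # Returns 0 once the guest's sshd is reachable; 255 while the VM is still booting.
    ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
        -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa \
        docker@192.168.39.47 "exit 0"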
	I0313 23:45:55.675284   22414 main.go:141] libmachine: (ha-504633-m02) KVM machine creation complete!
	I0313 23:45:55.675599   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetConfigRaw
	I0313 23:45:55.676128   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:55.676317   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:55.676471   22414 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0313 23:45:55.676484   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetState
	I0313 23:45:55.677718   22414 main.go:141] libmachine: Detecting operating system of created instance...
	I0313 23:45:55.677734   22414 main.go:141] libmachine: Waiting for SSH to be available...
	I0313 23:45:55.677742   22414 main.go:141] libmachine: Getting to WaitForSSH function...
	I0313 23:45:55.677766   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:55.680082   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.680479   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:55.680504   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.680679   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:55.680884   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:55.681098   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:55.681273   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:55.681495   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:45:55.681754   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0313 23:45:55.681768   22414 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0313 23:45:55.782454   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:45:55.782482   22414 main.go:141] libmachine: Detecting the provisioner...
	I0313 23:45:55.782493   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:55.785256   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.785696   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:55.785729   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.785941   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:55.786136   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:55.786318   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:55.786495   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:55.786643   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:45:55.786829   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0313 23:45:55.786840   22414 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0313 23:45:55.887744   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0313 23:45:55.887802   22414 main.go:141] libmachine: found compatible host: buildroot
	I0313 23:45:55.887809   22414 main.go:141] libmachine: Provisioning with buildroot...
	I0313 23:45:55.887821   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetMachineName
	I0313 23:45:55.888053   22414 buildroot.go:166] provisioning hostname "ha-504633-m02"
	I0313 23:45:55.888083   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetMachineName
	I0313 23:45:55.888227   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:55.890727   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.891153   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:55.891191   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:55.891330   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:55.891546   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:55.891723   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:55.891920   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:55.892163   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:45:55.892333   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0313 23:45:55.892352   22414 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-504633-m02 && echo "ha-504633-m02" | sudo tee /etc/hostname
	I0313 23:45:56.010147   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-504633-m02
	
	I0313 23:45:56.010190   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:56.013048   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.013390   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.013416   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.013576   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:56.013765   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.013986   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.014146   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:56.014351   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:45:56.014510   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0313 23:45:56.014526   22414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-504633-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-504633-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-504633-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0313 23:45:56.128696   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
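The two SSH commands above set the guest's hostname and keep /etc/hosts consistent in an idempotent way: an existing 127.0.1.1 entry is rewritten to the new name, otherwise one is appended. A minimal Go sketch of how such a command string can be assembled (an editorial illustration with a hypothetical hostnameCmd helper, not minikube's actual provisioner code):

    package main

    import "fmt"

    // hostnameCmd returns the shell snippet run over SSH to set the machine
    // hostname and keep /etc/hosts in sync, modelled on the log above.
    func hostnameCmd(hostname string) string {
    	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
    }

    func main() {
    	fmt.Println(hostnameCmd("ha-504633-m02"))
    }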
	I0313 23:45:56.128724   22414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0313 23:45:56.128740   22414 buildroot.go:174] setting up certificates
	I0313 23:45:56.128751   22414 provision.go:84] configureAuth start
	I0313 23:45:56.128759   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetMachineName
	I0313 23:45:56.129076   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:45:56.132033   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.132490   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.132514   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.132662   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:56.135115   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.135472   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.135500   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.135645   22414 provision.go:143] copyHostCerts
	I0313 23:45:56.135691   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:45:56.135734   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0313 23:45:56.135743   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:45:56.135812   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0313 23:45:56.135891   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:45:56.135908   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0313 23:45:56.135914   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:45:56.135936   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0313 23:45:56.135986   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:45:56.136002   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0313 23:45:56.136008   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:45:56.136027   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0313 23:45:56.136071   22414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.ha-504633-m02 san=[127.0.0.1 192.168.39.47 ha-504633-m02 localhost minikube]
	I0313 23:45:56.258650   22414 provision.go:177] copyRemoteCerts
	I0313 23:45:56.258701   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0313 23:45:56.258721   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:56.261365   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.261837   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.261866   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.262046   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:56.262301   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.262483   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:56.262611   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	I0313 23:45:56.342093   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0313 23:45:56.342157   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0313 23:45:56.368693   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0313 23:45:56.368758   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0313 23:45:56.394391   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0313 23:45:56.394467   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0313 23:45:56.421029   22414 provision.go:87] duration metric: took 292.265613ms to configureAuth
	I0313 23:45:56.421058   22414 buildroot.go:189] setting minikube options for container-runtime
	I0313 23:45:56.421284   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:45:56.421372   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:56.423816   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.424184   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.424232   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.424344   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:56.424557   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.424713   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.424824   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:56.424987   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:45:56.425185   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0313 23:45:56.425203   22414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0313 23:45:56.696023   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0313 23:45:56.696055   22414 main.go:141] libmachine: Checking connection to Docker...
	I0313 23:45:56.696063   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetURL
	I0313 23:45:56.697304   22414 main.go:141] libmachine: (ha-504633-m02) DBG | Using libvirt version 6000000
	I0313 23:45:56.699333   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.699763   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.699800   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.699938   22414 main.go:141] libmachine: Docker is up and running!
	I0313 23:45:56.699956   22414 main.go:141] libmachine: Reticulating splines...
	I0313 23:45:56.699963   22414 client.go:171] duration metric: took 27.319451348s to LocalClient.Create
	I0313 23:45:56.699987   22414 start.go:167] duration metric: took 27.319533471s to libmachine.API.Create "ha-504633"
	I0313 23:45:56.700000   22414 start.go:293] postStartSetup for "ha-504633-m02" (driver="kvm2")
	I0313 23:45:56.700014   22414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0313 23:45:56.700034   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:56.700297   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0313 23:45:56.700317   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:56.702924   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.703363   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.703390   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.703602   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:56.703803   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.703990   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:56.704152   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	I0313 23:45:56.791654   22414 ssh_runner.go:195] Run: cat /etc/os-release
	I0313 23:45:56.795967   22414 info.go:137] Remote host: Buildroot 2023.02.9
	I0313 23:45:56.795988   22414 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0313 23:45:56.796046   22414 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0313 23:45:56.796116   22414 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0313 23:45:56.796127   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /etc/ssl/certs/122682.pem
	I0313 23:45:56.796210   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0313 23:45:56.807866   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:45:56.832492   22414 start.go:296] duration metric: took 132.481015ms for postStartSetup
	I0313 23:45:56.832538   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetConfigRaw
	I0313 23:45:56.833113   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:45:56.836449   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.836977   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.837009   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.837753   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:45:56.838006   22414 start.go:128] duration metric: took 27.47639171s to createHost
	I0313 23:45:56.838042   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:56.841352   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.841776   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.841819   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.842116   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:56.842351   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.842578   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.842840   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:56.843046   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:45:56.843213   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0313 23:45:56.843225   22414 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0313 23:45:56.944146   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710373556.932699776
	
	I0313 23:45:56.944171   22414 fix.go:216] guest clock: 1710373556.932699776
	I0313 23:45:56.944179   22414 fix.go:229] Guest: 2024-03-13 23:45:56.932699776 +0000 UTC Remote: 2024-03-13 23:45:56.838022897 +0000 UTC m=+84.760506472 (delta=94.676879ms)
	I0313 23:45:56.944193   22414 fix.go:200] guest clock delta is within tolerance: 94.676879ms
	I0313 23:45:56.944198   22414 start.go:83] releasing machines lock for "ha-504633-m02", held for 27.582698737s
	I0313 23:45:56.944222   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:56.944477   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:45:56.947033   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.947343   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.947368   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.949909   22414 out.go:177] * Found network options:
	I0313 23:45:56.951494   22414 out.go:177]   - NO_PROXY=192.168.39.31
	W0313 23:45:56.952816   22414 proxy.go:119] fail to check proxy env: Error ip not in block
	I0313 23:45:56.952844   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:56.953409   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:56.953577   22414 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0313 23:45:56.953657   22414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0313 23:45:56.953684   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	W0313 23:45:56.953775   22414 proxy.go:119] fail to check proxy env: Error ip not in block
	I0313 23:45:56.953866   22414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0313 23:45:56.953890   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0313 23:45:56.956221   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.956342   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.956585   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.956609   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.956809   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:56.956843   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:56.956867   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:56.956990   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.957026   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0313 23:45:56.957168   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:56.957172   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0313 23:45:56.957355   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	I0313 23:45:56.957375   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0313 23:45:56.957516   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	I0313 23:45:57.205902   22414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0313 23:45:57.213432   22414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0313 23:45:57.213491   22414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0313 23:45:57.231091   22414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0313 23:45:57.231116   22414 start.go:494] detecting cgroup driver to use...
	I0313 23:45:57.231196   22414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0313 23:45:57.253572   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0313 23:45:57.271181   22414 docker.go:217] disabling cri-docker service (if available) ...
	I0313 23:45:57.271239   22414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0313 23:45:57.288471   22414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0313 23:45:57.303377   22414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0313 23:45:57.426602   22414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0313 23:45:57.570060   22414 docker.go:233] disabling docker service ...
	I0313 23:45:57.570122   22414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0313 23:45:57.585409   22414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0313 23:45:57.599737   22414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0313 23:45:57.744130   22414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0313 23:45:57.879672   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0313 23:45:57.895152   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0313 23:45:57.917190   22414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0313 23:45:57.917246   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:45:57.927977   22414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0313 23:45:57.928037   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:45:57.939210   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:45:57.950885   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:45:57.961971   22414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0313 23:45:57.972987   22414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0313 23:45:57.983426   22414 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0313 23:45:57.983487   22414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0313 23:45:57.998585   22414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0313 23:45:58.009366   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:45:58.139276   22414 ssh_runner.go:195] Run: sudo systemctl restart crio
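The sed-based edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O pins the pause image to registry.k8s.io/pause:3.9, uses the cgroupfs cgroup manager with conmon in the "pod" cgroup, and then the daemon is restarted. A minimal Go sketch of the two substitutions, assuming a hypothetical rewriteCrioConf helper rather than the sed-over-SSH approach the log uses:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Replace any existing pause_image / cgroup_manager settings in a CRI-O
    // drop-in, mirroring the sed commands in the log.
    var (
    	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    )

    func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
    	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
    	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
    	return conf
    }

    func main() {
    	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.6\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
    	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
    }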
	I0313 23:45:58.281865   22414 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0313 23:45:58.281928   22414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0313 23:45:58.287730   22414 start.go:562] Will wait 60s for crictl version
	I0313 23:45:58.287785   22414 ssh_runner.go:195] Run: which crictl
	I0313 23:45:58.291722   22414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0313 23:45:58.337522   22414 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0313 23:45:58.337611   22414 ssh_runner.go:195] Run: crio --version
	I0313 23:45:58.367229   22414 ssh_runner.go:195] Run: crio --version
	I0313 23:45:58.398548   22414 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0313 23:45:58.400312   22414 out.go:177]   - env NO_PROXY=192.168.39.31
	I0313 23:45:58.401870   22414 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0313 23:45:58.404617   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:58.404971   22414 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:45:44 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0313 23:45:58.405011   22414 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0313 23:45:58.405222   22414 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0313 23:45:58.409573   22414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:45:58.423481   22414 mustload.go:65] Loading cluster: ha-504633
	I0313 23:45:58.423707   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:45:58.423978   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:58.424031   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:58.439929   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I0313 23:45:58.440387   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:58.440818   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:58.440830   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:58.441179   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:58.441397   22414 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:45:58.442888   22414 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:45:58.443281   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:45:58.443325   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:45:58.457760   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33661
	I0313 23:45:58.458138   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:45:58.458582   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:45:58.458603   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:45:58.458964   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:45:58.459181   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:45:58.459357   22414 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633 for IP: 192.168.39.47
	I0313 23:45:58.459368   22414 certs.go:194] generating shared ca certs ...
	I0313 23:45:58.459385   22414 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:58.459499   22414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0313 23:45:58.459543   22414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0313 23:45:58.459557   22414 certs.go:256] generating profile certs ...
	I0313 23:45:58.459658   22414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key
	I0313 23:45:58.459693   22414 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.11fed047
	I0313 23:45:58.459713   22414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.11fed047 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.31 192.168.39.47 192.168.39.254]
	I0313 23:45:58.628806   22414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.11fed047 ...
	I0313 23:45:58.628834   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.11fed047: {Name:mkd54a5480bd97529ebe7020139c2848ba457963 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:58.629051   22414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.11fed047 ...
	I0313 23:45:58.629073   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.11fed047: {Name:mk72b13edd0ebac2393b4342e658100af58f8806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:45:58.629179   22414 certs.go:381] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.11fed047 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt
	I0313 23:45:58.629311   22414 certs.go:385] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.11fed047 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key
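The API-server profile certificate generated here carries a SAN list covering every address clients may use to reach the control plane: the in-cluster service IP 10.96.0.1, loopback, both control-plane node IPs (192.168.39.31 and 192.168.39.47), and the kube-vip VIP 192.168.39.254. A self-contained Go sketch of issuing a serving certificate with that SAN list (self-signed here for brevity; minikube signs with its minikubeCA key through its certs package):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Generate a key and a serving-cert template whose SANs match the log.
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-504633-m02", "localhost", "minikube"},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.31"), net.ParseIP("192.168.39.47"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }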
	I0313 23:45:58.629440   22414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key
	I0313 23:45:58.629461   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0313 23:45:58.629482   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0313 23:45:58.629497   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0313 23:45:58.629512   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0313 23:45:58.629533   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0313 23:45:58.629552   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0313 23:45:58.629568   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0313 23:45:58.629585   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0313 23:45:58.629662   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0313 23:45:58.629709   22414 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0313 23:45:58.629723   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0313 23:45:58.629759   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0313 23:45:58.629791   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0313 23:45:58.629822   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0313 23:45:58.629877   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:45:58.629919   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem -> /usr/share/ca-certificates/12268.pem
	I0313 23:45:58.629939   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /usr/share/ca-certificates/122682.pem
	I0313 23:45:58.629958   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:58.629999   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:45:58.632843   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:45:58.633272   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:45:58.633300   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:45:58.633450   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:45:58.633621   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:45:58.633826   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:45:58.633930   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:45:58.711289   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0313 23:45:58.716912   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0313 23:45:58.730305   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0313 23:45:58.735062   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0313 23:45:58.747692   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0313 23:45:58.752517   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0313 23:45:58.764245   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0313 23:45:58.768381   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0313 23:45:58.779388   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0313 23:45:58.787223   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0313 23:45:58.798951   22414 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0313 23:45:58.803650   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0313 23:45:58.815944   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0313 23:45:58.842948   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0313 23:45:58.868043   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0313 23:45:58.892434   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0313 23:45:58.916983   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0313 23:45:58.944221   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0313 23:45:58.970408   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0313 23:45:58.995916   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0313 23:45:59.023224   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0313 23:45:59.049254   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0313 23:45:59.075856   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0313 23:45:59.102308   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0313 23:45:59.120294   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0313 23:45:59.138109   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0313 23:45:59.155731   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0313 23:45:59.173865   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0313 23:45:59.193376   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0313 23:45:59.212013   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0313 23:45:59.230226   22414 ssh_runner.go:195] Run: openssl version
	I0313 23:45:59.236022   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0313 23:45:59.248252   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0313 23:45:59.252999   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0313 23:45:59.253051   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0313 23:45:59.258801   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0313 23:45:59.270271   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0313 23:45:59.281980   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:59.287105   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:59.287186   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:45:59.293179   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0313 23:45:59.305730   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0313 23:45:59.317958   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0313 23:45:59.323143   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0313 23:45:59.323207   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0313 23:45:59.329074   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0313 23:45:59.341607   22414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0313 23:45:59.346065   22414 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0313 23:45:59.346121   22414 kubeadm.go:928] updating node {m02 192.168.39.47 8443 v1.28.4 crio true true} ...
	I0313 23:45:59.346266   22414 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-504633-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0313 23:45:59.346311   22414 kube-vip.go:105] generating kube-vip config ...
	I0313 23:45:59.346349   22414 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
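The generated static-pod manifest runs kube-vip on each control-plane node: it ARP-advertises the VIP 192.168.39.254 on port 8443, elects a leader through the plndr-cp-lock lease, and load-balances the control plane (lb_enable/lb_port). A condensed Go sketch of rendering such a manifest from the VIP and API-server port (the real template lives in minikube's kube-vip package and carries the full env list shown above):

    package main

    import (
    	"os"
    	"text/template"
    )

    // A trimmed-down kube-vip static-pod template; only a subset of the env
    // vars from the log is included to keep the sketch short.
    var kubeVIPTmpl = template.Must(template.New("kube-vip").Parse(`apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.7.1
        args: ["manager"]
        env:
        - name: vip_arp
          value: "true"
        - name: port
          value: "{{ .Port }}"
        - name: address
          value: {{ .VIP }}
        - name: cp_enable
          value: "true"
        volumeMounts:
        - mountPath: /etc/kubernetes/admin.conf
          name: kubeconfig
      hostNetwork: true
      volumes:
      - hostPath:
          path: /etc/kubernetes/admin.conf
        name: kubeconfig
    `))

    func main() {
    	_ = kubeVIPTmpl.Execute(os.Stdout, struct {
    		VIP  string
    		Port int
    	}{VIP: "192.168.39.254", Port: 8443})
    }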
	I0313 23:45:59.346408   22414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0313 23:45:59.358406   22414 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0313 23:45:59.358470   22414 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0313 23:45:59.369388   22414 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0313 23:45:59.369416   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0313 23:45:59.369482   22414 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0313 23:45:59.369530   22414 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0313 23:45:59.369489   22414 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0313 23:45:59.374180   22414 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0313 23:45:59.374206   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0313 23:46:32.285539   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0313 23:46:32.285615   22414 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0313 23:46:32.290952   22414 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0313 23:46:32.290985   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0313 23:47:11.199243   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:47:11.217845   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0313 23:47:11.217934   22414 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0313 23:47:11.222619   22414 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0313 23:47:11.222649   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
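Because this node has no cached binaries under /var/lib/minikube/binaries/v1.28.4, kubectl, kubeadm, and kubelet are fetched from dl.k8s.io with the checksum pinned to the published .sha256 file and then copied to the guest over SSH. A minimal Go sketch of a checksum-verified download (downloadVerified is a hypothetical helper, not minikube's download package):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // downloadVerified fetches url into dst and compares its SHA-256 against
    // the published <url>.sha256 file, mirroring the checksum-pinned downloads
    // in the log above.
    func downloadVerified(url, dst string) error {
    	sumResp, err := http.Get(url + ".sha256")
    	if err != nil {
    		return err
    	}
    	defer sumResp.Body.Close()
    	sumBytes, err := io.ReadAll(sumResp.Body)
    	if err != nil {
    		return err
    	}
    	want := strings.Fields(string(sumBytes))[0]

    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("GET %s: %s", url, resp.Status)
    	}

    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()

    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != want {
    		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", url, got, want)
    	}
    	return nil
    }

    func main() {
    	url := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet"
    	if err := downloadVerified(url, "kubelet"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }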
	I0313 23:47:11.680337   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0313 23:47:11.690162   22414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0313 23:47:11.708141   22414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0313 23:47:11.725505   22414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0313 23:47:11.743502   22414 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0313 23:47:11.747926   22414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:47:11.761386   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:47:11.887854   22414 ssh_runner.go:195] Run: sudo systemctl start kubelet
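
The /etc/hosts rewrite above is done atomically: strip any stale control-plane.minikube.internal entry, append the HA virtual IP, write the result to a temp file, then copy it back before the daemon-reload and kubelet start. A small sketch that rebuilds the same shell pipeline; the IP and hostname come from the log, and the helper name is made up:

package main

import "fmt"

// hostsUpdateCmd reproduces the shell pipeline logged above: drop any existing
// line for the name, append the new IP/name pair, and install the file via a
// temp copy so /etc/hosts is never left half-written.
func hostsUpdateCmd(ip, name string) string {
	entry := ip + "\t" + name // real tab between IP and hostname, as in the log
	return fmt.Sprintf(
		`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
		name, entry)
}

func main() {
	// Matches the command run at 23:47:11.747926 for the HA virtual IP.
	fmt.Println(hostsUpdateCmd("192.168.39.254", "control-plane.minikube.internal"))
}
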
	I0313 23:47:11.905728   22414 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:47:11.906198   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:47:11.906249   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:47:11.921485   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44229
	I0313 23:47:11.921982   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:47:11.922510   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:47:11.922540   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:47:11.922874   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:47:11.923041   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:47:11.923241   22414 start.go:316] joinCluster: &{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cluster
Name:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:47:11.923367   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0313 23:47:11.923391   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:47:11.926804   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:47:11.927193   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:47:11.927221   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:47:11.927349   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:47:11.927534   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:47:11.927701   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:47:11.927852   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:47:12.099969   22414 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:47:12.100021   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3aq05d.mnsuf0499qv3j76i --discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-504633-m02 --control-plane --apiserver-advertise-address=192.168.39.47 --apiserver-bind-port=8443"
	I0313 23:47:51.475828   22414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3aq05d.mnsuf0499qv3j76i --discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-504633-m02 --control-plane --apiserver-advertise-address=192.168.39.47 --apiserver-bind-port=8443": (39.375779052s)
	I0313 23:47:51.475861   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0313 23:47:51.985920   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-504633-m02 minikube.k8s.io/updated_at=2024_03_13T23_47_51_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe minikube.k8s.io/name=ha-504633 minikube.k8s.io/primary=false
	I0313 23:47:52.121205   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-504633-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0313 23:47:52.237528   22414 start.go:318] duration metric: took 40.314283326s to joinCluster
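
joinCluster above runs in three steps: ask the existing control plane for a fresh join command (kubeadm token create --print-join-command --ttl=0), replay it on m02 with the control-plane flags, then label the new node and drop the control-plane NoSchedule taint. A sketch assembling those commands; the token and CA hash are placeholders for the values parsed from the token-create output:

package main

import "fmt"

// joinCommands builds the three commands the log shows for adding a new
// control-plane node: kubeadm join, the minikube label, and the taint removal.
func joinCommands(nodeName, advertiseIP, token, caHash string) []string {
	join := fmt.Sprintf(
		"kubeadm join control-plane.minikube.internal:8443 --token %s "+
			"--discovery-token-ca-cert-hash %s --ignore-preflight-errors=all "+
			"--cri-socket unix:///var/run/crio/crio.sock --node-name=%s "+
			"--control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
		token, caHash, nodeName, advertiseIP)
	label := fmt.Sprintf("kubectl label --overwrite nodes %s minikube.k8s.io/primary=false", nodeName)
	untaint := fmt.Sprintf("kubectl taint nodes %s node-role.kubernetes.io/control-plane:NoSchedule-", nodeName)
	return []string{join, label, untaint}
}

func main() {
	for _, c := range joinCommands("ha-504633-m02", "192.168.39.47", "<token>", "sha256:<hash>") {
		fmt.Println(c)
	}
}
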
	I0313 23:47:52.237605   22414 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:47:52.239662   22414 out.go:177] * Verifying Kubernetes components...
	I0313 23:47:52.237861   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:47:52.241375   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:47:52.457661   22414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:47:52.477311   22414 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:47:52.477567   22414 kapi.go:59] client config for ha-504633: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.crt", KeyFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key", CAFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0313 23:47:52.477627   22414 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.31:8443
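
The warning above means the client was built against the HA virtual IP (192.168.39.254) but is re-pointed at one concrete apiserver (192.168.39.31) for the readiness checks that follow. With client-go that amounts to overriding rest.Config.Host; a minimal sketch, assuming a placeholder kubeconfig path:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig (placeholder path), then pin the host to one
	// control-plane endpoint instead of the HA virtual IP, as the warning does.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println("stale host:", cfg.Host)
	cfg.Host = "https://192.168.39.31:8443"
	fmt.Println("using host:", cfg.Host)
}
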
	I0313 23:47:52.477839   22414 node_ready.go:35] waiting up to 6m0s for node "ha-504633-m02" to be "Ready" ...
	I0313 23:47:52.477935   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:52.477946   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:52.477957   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:52.477964   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:52.493113   22414 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0313 23:47:52.978332   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:52.978352   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:52.978360   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:52.978365   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:52.983229   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:53.478574   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:53.478595   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:53.478607   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:53.478611   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:53.483227   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:53.978501   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:53.978524   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:53.978533   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:53.978538   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:53.982322   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:54.478972   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:54.478996   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:54.479006   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:54.479012   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:54.483583   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:54.484503   22414 node_ready.go:53] node "ha-504633-m02" has status "Ready":"False"
	I0313 23:47:54.978515   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:54.978537   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:54.978545   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:54.978549   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:54.983933   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:47:55.478164   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:55.478189   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:55.478198   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:55.478204   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:55.482562   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:55.979021   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:55.979049   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:55.979061   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:55.979065   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:55.982296   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:56.478032   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:56.478058   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:56.478069   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:56.478073   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:56.481921   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:56.978113   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:56.978135   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:56.978143   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:56.978146   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:56.983349   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:47:56.983967   22414 node_ready.go:53] node "ha-504633-m02" has status "Ready":"False"
	I0313 23:47:57.478829   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:57.478861   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:57.478872   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:57.478877   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:57.483587   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:57.978341   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:57.978362   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:57.978372   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:57.978378   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:57.982655   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:58.478375   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:58.478397   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:58.478407   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:58.478414   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:58.482079   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:58.978321   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:58.978344   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:58.978351   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:58.978355   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:58.982057   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:58.982809   22414 node_ready.go:49] node "ha-504633-m02" has status "Ready":"True"
	I0313 23:47:58.982826   22414 node_ready.go:38] duration metric: took 6.504971274s for node "ha-504633-m02" to be "Ready" ...
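
The block of repeated GETs against /api/v1/nodes/ha-504633-m02 is a plain ~500ms poll until the node reports Ready=True. A minimal client-go sketch of the same wait; the kubeconfig path is a placeholder and this is not minikube's node_ready.go itself:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver every 500ms until the node's Ready
// condition is True, or the timeout elapses.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-504633-m02", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
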
	I0313 23:47:58.982836   22414 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0313 23:47:58.982917   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:47:58.982928   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:58.982935   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:58.982938   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:58.988207   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:47:58.994146   22414 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dbkfv" in "kube-system" namespace to be "Ready" ...
	I0313 23:47:58.994211   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dbkfv
	I0313 23:47:58.994219   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:58.994226   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:58.994239   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:58.998051   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:58.999095   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:47:58.999110   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:58.999117   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:58.999122   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.003197   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:59.003782   22414 pod_ready.go:92] pod "coredns-5dd5756b68-dbkfv" in "kube-system" namespace has status "Ready":"True"
	I0313 23:47:59.003797   22414 pod_ready.go:81] duration metric: took 9.630585ms for pod "coredns-5dd5756b68-dbkfv" in "kube-system" namespace to be "Ready" ...
	I0313 23:47:59.003805   22414 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hh2kw" in "kube-system" namespace to be "Ready" ...
	I0313 23:47:59.003864   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-hh2kw
	I0313 23:47:59.003874   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.003880   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.003885   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.007817   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:59.008320   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:47:59.008334   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.008340   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.008346   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.011206   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:47:59.011793   22414 pod_ready.go:92] pod "coredns-5dd5756b68-hh2kw" in "kube-system" namespace has status "Ready":"True"
	I0313 23:47:59.011809   22414 pod_ready.go:81] duration metric: took 7.998065ms for pod "coredns-5dd5756b68-hh2kw" in "kube-system" namespace to be "Ready" ...
	I0313 23:47:59.011820   22414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:47:59.011873   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633
	I0313 23:47:59.011881   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.011888   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.011894   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.014563   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:47:59.015010   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:47:59.015023   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.015030   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.015036   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.017377   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:47:59.017771   22414 pod_ready.go:92] pod "etcd-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:47:59.017785   22414 pod_ready.go:81] duration metric: took 5.95535ms for pod "etcd-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:47:59.017792   22414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:47:59.017832   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:47:59.017840   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.017847   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.017852   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.020971   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:59.021669   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:59.021683   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.021689   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.021693   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.024788   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:47:59.518843   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:47:59.518866   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.518874   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.518878   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.523108   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:47:59.523813   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:47:59.523827   22414 round_trippers.go:469] Request Headers:
	I0313 23:47:59.523834   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:47:59.523837   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:47:59.526949   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:00.018411   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:48:00.018433   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:00.018440   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:00.018444   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:00.022266   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:00.023044   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:00.023061   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:00.023069   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:00.023072   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:00.026130   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:00.518041   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:48:00.518063   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:00.518071   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:00.518075   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:00.522084   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:00.522900   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:00.522916   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:00.522925   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:00.522929   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:00.525916   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:48:01.018526   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:48:01.018555   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:01.018566   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:01.018571   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:01.022603   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:01.023282   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:01.023302   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:01.023312   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:01.023315   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:01.026348   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:01.026975   22414 pod_ready.go:102] pod "etcd-ha-504633-m02" in "kube-system" namespace has status "Ready":"False"
	I0313 23:48:01.518286   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:48:01.518320   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:01.518328   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:01.518332   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:01.522224   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:01.522904   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:01.522917   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:01.522927   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:01.522932   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:01.526296   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:02.018940   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:48:02.018962   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:02.018971   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:02.018976   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:02.022847   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:02.023554   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:02.023568   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:02.023575   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:02.023582   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:02.026974   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:02.518917   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:48:02.518948   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:02.518957   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:02.518962   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:02.523360   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:02.524105   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:02.524123   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:02.524133   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:02.524139   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:02.527021   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:48:03.017971   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:48:03.017993   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.018000   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.018006   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.021978   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:03.022782   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:03.022798   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.022809   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.022814   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.027600   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:03.028102   22414 pod_ready.go:92] pod "etcd-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:03.028121   22414 pod_ready.go:81] duration metric: took 4.010321625s for pod "etcd-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.028140   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.028203   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633
	I0313 23:48:03.028213   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.028224   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.028231   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.031188   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:48:03.031749   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:48:03.031764   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.031773   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.031778   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.034583   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:48:03.035266   22414 pod_ready.go:92] pod "kube-apiserver-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:03.035285   22414 pod_ready.go:81] duration metric: took 7.136593ms for pod "kube-apiserver-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.035298   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.035359   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633-m02
	I0313 23:48:03.035370   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.035379   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.035388   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.038372   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:48:03.038960   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:03.038976   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.038985   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.038988   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.042434   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:03.042966   22414 pod_ready.go:92] pod "kube-apiserver-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:03.042985   22414 pod_ready.go:81] duration metric: took 7.679023ms for pod "kube-apiserver-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.042998   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.179376   22414 request.go:629] Waited for 136.309846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633
	I0313 23:48:03.179447   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633
	I0313 23:48:03.179452   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.179504   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.179513   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.183195   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:03.379289   22414 request.go:629] Waited for 195.403269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:48:03.379350   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:48:03.379358   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.379368   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.379376   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.383468   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:03.384269   22414 pod_ready.go:92] pod "kube-controller-manager-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:03.384287   22414 pod_ready.go:81] duration metric: took 341.281587ms for pod "kube-controller-manager-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.384297   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.579273   22414 request.go:629] Waited for 194.904156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633-m02
	I0313 23:48:03.579324   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633-m02
	I0313 23:48:03.579330   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.579338   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.579342   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.583258   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:03.778723   22414 request.go:629] Waited for 194.4133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:03.778826   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:03.778842   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.778852   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.778861   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.782658   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:03.783399   22414 pod_ready.go:92] pod "kube-controller-manager-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:03.783419   22414 pod_ready.go:81] duration metric: took 399.114651ms for pod "kube-controller-manager-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.783432   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4s9t5" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:03.978697   22414 request.go:629] Waited for 195.188215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4s9t5
	I0313 23:48:03.978751   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4s9t5
	I0313 23:48:03.978756   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:03.978777   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:03.978783   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:03.983524   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:04.178832   22414 request.go:629] Waited for 194.433461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:04.178904   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:04.178910   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:04.178918   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:04.178925   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:04.182760   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:04.183322   22414 pod_ready.go:92] pod "kube-proxy-4s9t5" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:04.183341   22414 pod_ready.go:81] duration metric: took 399.902997ms for pod "kube-proxy-4s9t5" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:04.183351   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j56zl" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:04.379418   22414 request.go:629] Waited for 196.006939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j56zl
	I0313 23:48:04.379486   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j56zl
	I0313 23:48:04.379491   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:04.379498   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:04.379502   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:04.383452   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:04.578789   22414 request.go:629] Waited for 194.592749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:48:04.578870   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:48:04.578881   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:04.578888   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:04.578891   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:04.582983   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:04.583718   22414 pod_ready.go:92] pod "kube-proxy-j56zl" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:04.583738   22414 pod_ready.go:81] duration metric: took 400.380755ms for pod "kube-proxy-j56zl" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:04.583751   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:04.779022   22414 request.go:629] Waited for 195.183559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633
	I0313 23:48:04.779098   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633
	I0313 23:48:04.779105   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:04.779117   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:04.779129   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:04.783580   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:04.978825   22414 request.go:629] Waited for 194.38583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:48:04.978877   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:48:04.978882   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:04.978889   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:04.978894   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:04.984336   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:48:04.985132   22414 pod_ready.go:92] pod "kube-scheduler-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:04.985153   22414 pod_ready.go:81] duration metric: took 401.395449ms for pod "kube-scheduler-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:04.985163   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:05.179219   22414 request.go:629] Waited for 193.979517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633-m02
	I0313 23:48:05.179281   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633-m02
	I0313 23:48:05.179288   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:05.179296   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:05.179302   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:05.182936   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:05.378971   22414 request.go:629] Waited for 195.391408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:05.379022   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:48:05.379028   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:05.379034   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:05.379039   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:05.383483   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:05.384088   22414 pod_ready.go:92] pod "kube-scheduler-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:48:05.384107   22414 pod_ready.go:81] duration metric: took 398.938177ms for pod "kube-scheduler-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:48:05.384118   22414 pod_ready.go:38] duration metric: took 6.401255852s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
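
After the node is Ready, the same pattern repeats per pod: every system-critical pod matching the listed selectors must report Ready=True. The interleaved "Waited ... due to client-side throttling" lines are the default client-go rate limiter pacing these per-pod GETs, not a server-side delay. A sketch of one pass of that check, with a placeholder kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// systemPodsReady reports the first kube-system pod matching one of the
// selectors that is not yet Ready; nil means everything the log waits on is up.
func systemPodsReady(ctx context.Context, cs *kubernetes.Clientset) error {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return fmt.Errorf("pod %s (%s) not Ready yet", p.Name, sel)
			}
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := systemPodsReady(context.Background(), cs); err != nil {
		fmt.Println(err)
	}
}
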
	I0313 23:48:05.384133   22414 api_server.go:52] waiting for apiserver process to appear ...
	I0313 23:48:05.384189   22414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:48:05.402480   22414 api_server.go:72] duration metric: took 13.164836481s to wait for apiserver process to appear ...
	I0313 23:48:05.402502   22414 api_server.go:88] waiting for apiserver healthz status ...
	I0313 23:48:05.402519   22414 api_server.go:253] Checking apiserver healthz at https://192.168.39.31:8443/healthz ...
	I0313 23:48:05.409852   22414 api_server.go:279] https://192.168.39.31:8443/healthz returned 200:
	ok
	I0313 23:48:05.409925   22414 round_trippers.go:463] GET https://192.168.39.31:8443/version
	I0313 23:48:05.409931   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:05.409939   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:05.409949   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:05.411250   22414 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0313 23:48:05.411399   22414 api_server.go:141] control plane version: v1.28.4
	I0313 23:48:05.411426   22414 api_server.go:131] duration metric: took 8.915989ms to wait for apiserver health ...
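
The apiserver health gate above is an HTTPS GET of /healthz that must return 200 with body "ok", followed by /version to record the control-plane version. A bare-bones sketch; it skips TLS verification only to keep the example short, whereas minikube authenticates with the certs from the kubeconfig:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// For a real check, load the CA and client cert/key from the kubeconfig;
	// InsecureSkipVerify here is just to keep the sketch self-contained.
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := client.Get("https://192.168.39.31:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // healthy apiserver answers "200 ok"
}
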
	I0313 23:48:05.411437   22414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0313 23:48:05.578895   22414 request.go:629] Waited for 167.356892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:48:05.578949   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:48:05.578959   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:05.578970   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:05.578983   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:05.585342   22414 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0313 23:48:05.590324   22414 system_pods.go:59] 17 kube-system pods found
	I0313 23:48:05.590350   22414 system_pods.go:61] "coredns-5dd5756b68-dbkfv" [bb55bb86-7637-4571-af89-55b34361d46f] Running
	I0313 23:48:05.590354   22414 system_pods.go:61] "coredns-5dd5756b68-hh2kw" [ac81d022-8c47-4f99-8a34-bb4f73ead561] Running
	I0313 23:48:05.590358   22414 system_pods.go:61] "etcd-ha-504633" [eddb386f-0b62-4325-a14d-c2bb03e656bb] Running
	I0313 23:48:05.590362   22414 system_pods.go:61] "etcd-ha-504633-m02" [faf6ca7d-343c-4a0d-91ed-d2a401952f47] Running
	I0313 23:48:05.590364   22414 system_pods.go:61] "kindnet-8kvnb" [b356234a-5293-417c-b78f-8d532dfe1bc1] Running
	I0313 23:48:05.590367   22414 system_pods.go:61] "kindnet-f4pz8" [9df1057a-d870-4e77-9261-0db3a8f2700f] Running
	I0313 23:48:05.590370   22414 system_pods.go:61] "kube-apiserver-ha-504633" [f4d0ca2b-c730-43a3-838f-05d28db9de36] Running
	I0313 23:48:05.590372   22414 system_pods.go:61] "kube-apiserver-ha-504633-m02" [3eff93a9-6e69-4837-9fe0-4357ae747b22] Running
	I0313 23:48:05.590376   22414 system_pods.go:61] "kube-controller-manager-ha-504633" [2a252519-49e0-4dce-81b3-73d6ea16b4d1] Running
	I0313 23:48:05.590379   22414 system_pods.go:61] "kube-controller-manager-ha-504633-m02" [b0ead72b-b8f6-449b-8f08-98bb5181b597] Running
	I0313 23:48:05.590382   22414 system_pods.go:61] "kube-proxy-4s9t5" [e635f7e7-bc86-47d1-8368-db03fda06076] Running
	I0313 23:48:05.590384   22414 system_pods.go:61] "kube-proxy-j56zl" [9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4] Running
	I0313 23:48:05.590387   22414 system_pods.go:61] "kube-scheduler-ha-504633" [41980454-97cd-4fa1-ac32-edc1e1c5bc02] Running
	I0313 23:48:05.590390   22414 system_pods.go:61] "kube-scheduler-ha-504633-m02" [24d5f3a2-02f9-4f0b-b1ea-94b8c065b6e4] Running
	I0313 23:48:05.590396   22414 system_pods.go:61] "kube-vip-ha-504633" [eed1a72e-a42d-4961-b3bf-f3a2bb7fa3bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:48:05.590403   22414 system_pods.go:61] "kube-vip-ha-504633-m02" [88577446-c879-45b6-a7f6-6b76e9e26fcc] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:48:05.590408   22414 system_pods.go:61] "storage-provisioner" [0e57f625-8927-418c-bdf2-9022439f858c] Running
	I0313 23:48:05.590416   22414 system_pods.go:74] duration metric: took 178.969124ms to wait for pod list to return data ...
	I0313 23:48:05.590427   22414 default_sa.go:34] waiting for default service account to be created ...
	I0313 23:48:05.778863   22414 request.go:629] Waited for 188.346037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/default/serviceaccounts
	I0313 23:48:05.778945   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/default/serviceaccounts
	I0313 23:48:05.778951   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:05.778958   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:05.778962   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:05.783286   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:48:05.783501   22414 default_sa.go:45] found service account: "default"
	I0313 23:48:05.783520   22414 default_sa.go:55] duration metric: took 193.086181ms for default service account to be created ...
	I0313 23:48:05.783531   22414 system_pods.go:116] waiting for k8s-apps to be running ...
	I0313 23:48:05.979041   22414 request.go:629] Waited for 195.427717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:48:05.979166   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:48:05.979188   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:05.979198   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:05.979205   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:05.985102   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:48:05.989345   22414 system_pods.go:86] 17 kube-system pods found
	I0313 23:48:05.989379   22414 system_pods.go:89] "coredns-5dd5756b68-dbkfv" [bb55bb86-7637-4571-af89-55b34361d46f] Running
	I0313 23:48:05.989388   22414 system_pods.go:89] "coredns-5dd5756b68-hh2kw" [ac81d022-8c47-4f99-8a34-bb4f73ead561] Running
	I0313 23:48:05.989395   22414 system_pods.go:89] "etcd-ha-504633" [eddb386f-0b62-4325-a14d-c2bb03e656bb] Running
	I0313 23:48:05.989402   22414 system_pods.go:89] "etcd-ha-504633-m02" [faf6ca7d-343c-4a0d-91ed-d2a401952f47] Running
	I0313 23:48:05.989407   22414 system_pods.go:89] "kindnet-8kvnb" [b356234a-5293-417c-b78f-8d532dfe1bc1] Running
	I0313 23:48:05.989413   22414 system_pods.go:89] "kindnet-f4pz8" [9df1057a-d870-4e77-9261-0db3a8f2700f] Running
	I0313 23:48:05.989420   22414 system_pods.go:89] "kube-apiserver-ha-504633" [f4d0ca2b-c730-43a3-838f-05d28db9de36] Running
	I0313 23:48:05.989430   22414 system_pods.go:89] "kube-apiserver-ha-504633-m02" [3eff93a9-6e69-4837-9fe0-4357ae747b22] Running
	I0313 23:48:05.989437   22414 system_pods.go:89] "kube-controller-manager-ha-504633" [2a252519-49e0-4dce-81b3-73d6ea16b4d1] Running
	I0313 23:48:05.989450   22414 system_pods.go:89] "kube-controller-manager-ha-504633-m02" [b0ead72b-b8f6-449b-8f08-98bb5181b597] Running
	I0313 23:48:05.989456   22414 system_pods.go:89] "kube-proxy-4s9t5" [e635f7e7-bc86-47d1-8368-db03fda06076] Running
	I0313 23:48:05.989465   22414 system_pods.go:89] "kube-proxy-j56zl" [9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4] Running
	I0313 23:48:05.989474   22414 system_pods.go:89] "kube-scheduler-ha-504633" [41980454-97cd-4fa1-ac32-edc1e1c5bc02] Running
	I0313 23:48:05.989483   22414 system_pods.go:89] "kube-scheduler-ha-504633-m02" [24d5f3a2-02f9-4f0b-b1ea-94b8c065b6e4] Running
	I0313 23:48:05.989496   22414 system_pods.go:89] "kube-vip-ha-504633" [eed1a72e-a42d-4961-b3bf-f3a2bb7fa3bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:48:05.989507   22414 system_pods.go:89] "kube-vip-ha-504633-m02" [88577446-c879-45b6-a7f6-6b76e9e26fcc] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:48:05.989516   22414 system_pods.go:89] "storage-provisioner" [0e57f625-8927-418c-bdf2-9022439f858c] Running
	I0313 23:48:05.989524   22414 system_pods.go:126] duration metric: took 205.987083ms to wait for k8s-apps to be running ...
	I0313 23:48:05.989533   22414 system_svc.go:44] waiting for kubelet service to be running ....
	I0313 23:48:05.989583   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:48:06.006976   22414 system_svc.go:56] duration metric: took 17.436264ms WaitForService to wait for kubelet
	I0313 23:48:06.007006   22414 kubeadm.go:576] duration metric: took 13.76935953s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0313 23:48:06.007036   22414 node_conditions.go:102] verifying NodePressure condition ...
	I0313 23:48:06.178382   22414 request.go:629] Waited for 171.274898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes
	I0313 23:48:06.178443   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes
	I0313 23:48:06.178448   22414 round_trippers.go:469] Request Headers:
	I0313 23:48:06.178455   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:48:06.178461   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:48:06.182313   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:48:06.183249   22414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0313 23:48:06.183270   22414 node_conditions.go:123] node cpu capacity is 2
	I0313 23:48:06.183284   22414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0313 23:48:06.183289   22414 node_conditions.go:123] node cpu capacity is 2
	I0313 23:48:06.183294   22414 node_conditions.go:105] duration metric: took 176.25042ms to run NodePressure ...
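The NodePressure step above lists every node (GET /api/v1/nodes) and reads each node's CPU and ephemeral-storage capacity, which is why two capacity pairs are printed for the two existing nodes. A minimal client-go sketch of the same query, not minikube's own round-tripper code; the kubeconfig path is an assumption:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a kubeconfig at the default ~/.kube/config location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }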
	I0313 23:48:06.183307   22414 start.go:240] waiting for startup goroutines ...
	I0313 23:48:06.183338   22414 start.go:254] writing updated cluster config ...
	I0313 23:48:06.185457   22414 out.go:177] 
	I0313 23:48:06.187324   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:48:06.187462   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:48:06.189637   22414 out.go:177] * Starting "ha-504633-m03" control-plane node in "ha-504633" cluster
	I0313 23:48:06.191396   22414 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:48:06.191420   22414 cache.go:56] Caching tarball of preloaded images
	I0313 23:48:06.191518   22414 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0313 23:48:06.191529   22414 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0313 23:48:06.191626   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:48:06.191804   22414 start.go:360] acquireMachinesLock for ha-504633-m03: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0313 23:48:06.191845   22414 start.go:364] duration metric: took 22.662µs to acquireMachinesLock for "ha-504633-m03"
	I0313 23:48:06.191858   22414 start.go:93] Provisioning new machine with config: &{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:48:06.191972   22414 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0313 23:48:06.193917   22414 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0313 23:48:06.193999   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:48:06.194032   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:48:06.208696   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41991
	I0313 23:48:06.209197   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:48:06.209657   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:48:06.209682   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:48:06.210020   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:48:06.210225   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetMachineName
	I0313 23:48:06.210434   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:06.210629   22414 start.go:159] libmachine.API.Create for "ha-504633" (driver="kvm2")
	I0313 23:48:06.210662   22414 client.go:168] LocalClient.Create starting
	I0313 23:48:06.210699   22414 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem
	I0313 23:48:06.210746   22414 main.go:141] libmachine: Decoding PEM data...
	I0313 23:48:06.210780   22414 main.go:141] libmachine: Parsing certificate...
	I0313 23:48:06.210839   22414 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem
	I0313 23:48:06.210859   22414 main.go:141] libmachine: Decoding PEM data...
	I0313 23:48:06.210871   22414 main.go:141] libmachine: Parsing certificate...
	I0313 23:48:06.210888   22414 main.go:141] libmachine: Running pre-create checks...
	I0313 23:48:06.210895   22414 main.go:141] libmachine: (ha-504633-m03) Calling .PreCreateCheck
	I0313 23:48:06.211118   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetConfigRaw
	I0313 23:48:06.211520   22414 main.go:141] libmachine: Creating machine...
	I0313 23:48:06.211533   22414 main.go:141] libmachine: (ha-504633-m03) Calling .Create
	I0313 23:48:06.211662   22414 main.go:141] libmachine: (ha-504633-m03) Creating KVM machine...
	I0313 23:48:06.213229   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found existing default KVM network
	I0313 23:48:06.213321   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found existing private KVM network mk-ha-504633
	I0313 23:48:06.213492   22414 main.go:141] libmachine: (ha-504633-m03) Setting up store path in /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03 ...
	I0313 23:48:06.213532   22414 main.go:141] libmachine: (ha-504633-m03) Building disk image from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0313 23:48:06.213634   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:06.213484   23288 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:48:06.213784   22414 main.go:141] libmachine: (ha-504633-m03) Downloading /home/jenkins/minikube-integration/18375-4912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0313 23:48:06.428369   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:06.428224   23288 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa...
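Before building the disk image the driver writes a fresh machine SSH key pair (the id_rsa created above is what every later WaitForSSH step authenticates with). A minimal sketch of that key generation, assuming RSA and the usual id_rsa / id_rsa.pub layout; the helper name is hypothetical:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // writeKeyPair (hypothetical) writes a PEM private key and an authorized_keys-style
    // public key, the shape docker-machine style drivers use for VM SSH access.
    func writeKeyPair(path string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        priv := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile(path, priv, 0o600); err != nil {
            return err
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return err
        }
        return os.WriteFile(path+".pub", ssh.MarshalAuthorizedKey(pub), 0o644)
    }

    func main() {
        if err := writeKeyPair("id_rsa"); err != nil {
            panic(err)
        }
    }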
	I0313 23:48:06.650011   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:06.649902   23288 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/ha-504633-m03.rawdisk...
	I0313 23:48:06.650044   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Writing magic tar header
	I0313 23:48:06.650058   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Writing SSH key tar header
	I0313 23:48:06.650149   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:06.650055   23288 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03 ...
	I0313 23:48:06.650201   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03
	I0313 23:48:06.650213   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines
	I0313 23:48:06.650231   22414 main.go:141] libmachine: (ha-504633-m03) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03 (perms=drwx------)
	I0313 23:48:06.650251   22414 main.go:141] libmachine: (ha-504633-m03) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines (perms=drwxr-xr-x)
	I0313 23:48:06.650271   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:48:06.650287   22414 main.go:141] libmachine: (ha-504633-m03) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube (perms=drwxr-xr-x)
	I0313 23:48:06.650301   22414 main.go:141] libmachine: (ha-504633-m03) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912 (perms=drwxrwxr-x)
	I0313 23:48:06.650313   22414 main.go:141] libmachine: (ha-504633-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0313 23:48:06.650322   22414 main.go:141] libmachine: (ha-504633-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0313 23:48:06.650333   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912
	I0313 23:48:06.650346   22414 main.go:141] libmachine: (ha-504633-m03) Creating domain...
	I0313 23:48:06.650366   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0313 23:48:06.650374   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Checking permissions on dir: /home/jenkins
	I0313 23:48:06.650380   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Checking permissions on dir: /home
	I0313 23:48:06.650386   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Skipping /home - not owner
	I0313 23:48:06.651437   22414 main.go:141] libmachine: (ha-504633-m03) define libvirt domain using xml: 
	I0313 23:48:06.651458   22414 main.go:141] libmachine: (ha-504633-m03) <domain type='kvm'>
	I0313 23:48:06.651469   22414 main.go:141] libmachine: (ha-504633-m03)   <name>ha-504633-m03</name>
	I0313 23:48:06.651477   22414 main.go:141] libmachine: (ha-504633-m03)   <memory unit='MiB'>2200</memory>
	I0313 23:48:06.651486   22414 main.go:141] libmachine: (ha-504633-m03)   <vcpu>2</vcpu>
	I0313 23:48:06.651497   22414 main.go:141] libmachine: (ha-504633-m03)   <features>
	I0313 23:48:06.651505   22414 main.go:141] libmachine: (ha-504633-m03)     <acpi/>
	I0313 23:48:06.651513   22414 main.go:141] libmachine: (ha-504633-m03)     <apic/>
	I0313 23:48:06.651518   22414 main.go:141] libmachine: (ha-504633-m03)     <pae/>
	I0313 23:48:06.651523   22414 main.go:141] libmachine: (ha-504633-m03)     
	I0313 23:48:06.651529   22414 main.go:141] libmachine: (ha-504633-m03)   </features>
	I0313 23:48:06.651540   22414 main.go:141] libmachine: (ha-504633-m03)   <cpu mode='host-passthrough'>
	I0313 23:48:06.651559   22414 main.go:141] libmachine: (ha-504633-m03)   
	I0313 23:48:06.651582   22414 main.go:141] libmachine: (ha-504633-m03)   </cpu>
	I0313 23:48:06.651591   22414 main.go:141] libmachine: (ha-504633-m03)   <os>
	I0313 23:48:06.651598   22414 main.go:141] libmachine: (ha-504633-m03)     <type>hvm</type>
	I0313 23:48:06.651607   22414 main.go:141] libmachine: (ha-504633-m03)     <boot dev='cdrom'/>
	I0313 23:48:06.651612   22414 main.go:141] libmachine: (ha-504633-m03)     <boot dev='hd'/>
	I0313 23:48:06.651621   22414 main.go:141] libmachine: (ha-504633-m03)     <bootmenu enable='no'/>
	I0313 23:48:06.651625   22414 main.go:141] libmachine: (ha-504633-m03)   </os>
	I0313 23:48:06.651630   22414 main.go:141] libmachine: (ha-504633-m03)   <devices>
	I0313 23:48:06.651638   22414 main.go:141] libmachine: (ha-504633-m03)     <disk type='file' device='cdrom'>
	I0313 23:48:06.651676   22414 main.go:141] libmachine: (ha-504633-m03)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/boot2docker.iso'/>
	I0313 23:48:06.651710   22414 main.go:141] libmachine: (ha-504633-m03)       <target dev='hdc' bus='scsi'/>
	I0313 23:48:06.651723   22414 main.go:141] libmachine: (ha-504633-m03)       <readonly/>
	I0313 23:48:06.651734   22414 main.go:141] libmachine: (ha-504633-m03)     </disk>
	I0313 23:48:06.651748   22414 main.go:141] libmachine: (ha-504633-m03)     <disk type='file' device='disk'>
	I0313 23:48:06.651761   22414 main.go:141] libmachine: (ha-504633-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0313 23:48:06.651778   22414 main.go:141] libmachine: (ha-504633-m03)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/ha-504633-m03.rawdisk'/>
	I0313 23:48:06.651806   22414 main.go:141] libmachine: (ha-504633-m03)       <target dev='hda' bus='virtio'/>
	I0313 23:48:06.651818   22414 main.go:141] libmachine: (ha-504633-m03)     </disk>
	I0313 23:48:06.651829   22414 main.go:141] libmachine: (ha-504633-m03)     <interface type='network'>
	I0313 23:48:06.651841   22414 main.go:141] libmachine: (ha-504633-m03)       <source network='mk-ha-504633'/>
	I0313 23:48:06.651852   22414 main.go:141] libmachine: (ha-504633-m03)       <model type='virtio'/>
	I0313 23:48:06.651862   22414 main.go:141] libmachine: (ha-504633-m03)     </interface>
	I0313 23:48:06.651877   22414 main.go:141] libmachine: (ha-504633-m03)     <interface type='network'>
	I0313 23:48:06.651886   22414 main.go:141] libmachine: (ha-504633-m03)       <source network='default'/>
	I0313 23:48:06.651894   22414 main.go:141] libmachine: (ha-504633-m03)       <model type='virtio'/>
	I0313 23:48:06.651904   22414 main.go:141] libmachine: (ha-504633-m03)     </interface>
	I0313 23:48:06.651909   22414 main.go:141] libmachine: (ha-504633-m03)     <serial type='pty'>
	I0313 23:48:06.651917   22414 main.go:141] libmachine: (ha-504633-m03)       <target port='0'/>
	I0313 23:48:06.651922   22414 main.go:141] libmachine: (ha-504633-m03)     </serial>
	I0313 23:48:06.651930   22414 main.go:141] libmachine: (ha-504633-m03)     <console type='pty'>
	I0313 23:48:06.651941   22414 main.go:141] libmachine: (ha-504633-m03)       <target type='serial' port='0'/>
	I0313 23:48:06.651960   22414 main.go:141] libmachine: (ha-504633-m03)     </console>
	I0313 23:48:06.651973   22414 main.go:141] libmachine: (ha-504633-m03)     <rng model='virtio'>
	I0313 23:48:06.651984   22414 main.go:141] libmachine: (ha-504633-m03)       <backend model='random'>/dev/random</backend>
	I0313 23:48:06.651994   22414 main.go:141] libmachine: (ha-504633-m03)     </rng>
	I0313 23:48:06.652001   22414 main.go:141] libmachine: (ha-504633-m03)     
	I0313 23:48:06.652011   22414 main.go:141] libmachine: (ha-504633-m03)     
	I0313 23:48:06.652017   22414 main.go:141] libmachine: (ha-504633-m03)   </devices>
	I0313 23:48:06.652023   22414 main.go:141] libmachine: (ha-504633-m03) </domain>
	I0313 23:48:06.652030   22414 main.go:141] libmachine: (ha-504633-m03) 
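The XML above is handed to libvirt to define and then start the ha-504633-m03 domain (the kvm2 driver goes through libvirt's Go bindings, as the "Using libvirt version" line later confirms). An illustrative equivalent using virsh, assuming the XML has been saved to a file with the name shown:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Illustrative only: define the domain from the XML above, then start it.
    func main() {
        for _, args := range [][]string{
            {"define", "ha-504633-m03.xml"}, // the <domain> XML dumped above, saved to a file
            {"start", "ha-504633-m03"},
        } {
            out, err := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...).CombinedOutput()
            fmt.Printf("virsh %v: %s\n", args, out)
            if err != nil {
                panic(err)
            }
        }
    }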
	I0313 23:48:06.660477   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:39:86:a2 in network default
	I0313 23:48:06.661268   22414 main.go:141] libmachine: (ha-504633-m03) Ensuring networks are active...
	I0313 23:48:06.661289   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:06.662120   22414 main.go:141] libmachine: (ha-504633-m03) Ensuring network default is active
	I0313 23:48:06.662585   22414 main.go:141] libmachine: (ha-504633-m03) Ensuring network mk-ha-504633 is active
	I0313 23:48:06.663022   22414 main.go:141] libmachine: (ha-504633-m03) Getting domain xml...
	I0313 23:48:06.663865   22414 main.go:141] libmachine: (ha-504633-m03) Creating domain...
	I0313 23:48:07.899152   22414 main.go:141] libmachine: (ha-504633-m03) Waiting to get IP...
	I0313 23:48:07.900091   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:07.900537   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:07.900579   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:07.900503   23288 retry.go:31] will retry after 279.429776ms: waiting for machine to come up
	I0313 23:48:08.182127   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:08.182510   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:08.182539   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:08.182475   23288 retry.go:31] will retry after 280.916957ms: waiting for machine to come up
	I0313 23:48:08.464904   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:08.465438   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:08.465465   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:08.465381   23288 retry.go:31] will retry after 355.252581ms: waiting for machine to come up
	I0313 23:48:08.822123   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:08.822598   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:08.822625   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:08.822548   23288 retry.go:31] will retry after 578.530778ms: waiting for machine to come up
	I0313 23:48:09.402293   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:09.402759   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:09.402809   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:09.402706   23288 retry.go:31] will retry after 626.205833ms: waiting for machine to come up
	I0313 23:48:10.030354   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:10.030847   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:10.030875   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:10.030777   23288 retry.go:31] will retry after 661.699082ms: waiting for machine to come up
	I0313 23:48:10.694180   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:10.694639   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:10.694660   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:10.694599   23288 retry.go:31] will retry after 1.125196766s: waiting for machine to come up
	I0313 23:48:11.821217   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:11.821725   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:11.821747   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:11.821691   23288 retry.go:31] will retry after 1.11519518s: waiting for machine to come up
	I0313 23:48:12.939126   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:12.939562   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:12.939579   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:12.939541   23288 retry.go:31] will retry after 1.82498896s: waiting for machine to come up
	I0313 23:48:14.766124   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:14.766589   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:14.766645   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:14.766569   23288 retry.go:31] will retry after 2.004419745s: waiting for machine to come up
	I0313 23:48:16.772997   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:16.773447   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:16.773473   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:16.773411   23288 retry.go:31] will retry after 2.159705549s: waiting for machine to come up
	I0313 23:48:18.935766   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:18.936247   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:18.936272   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:18.936208   23288 retry.go:31] will retry after 3.427169274s: waiting for machine to come up
	I0313 23:48:22.364471   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:22.364909   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:22.364934   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:22.364874   23288 retry.go:31] will retry after 3.920707034s: waiting for machine to come up
	I0313 23:48:26.287337   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:26.287749   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find current IP address of domain ha-504633-m03 in network mk-ha-504633
	I0313 23:48:26.287775   22414 main.go:141] libmachine: (ha-504633-m03) DBG | I0313 23:48:26.287693   23288 retry.go:31] will retry after 4.612548047s: waiting for machine to come up
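The retry lines above poll the DHCP lease for the new domain with a growing, jittered delay until an address appears. A minimal sketch of that retry pattern; lookupIP is a stand-in for the driver's lease query (an assumption, not the driver's API):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookupIP, sleeping a randomized, growing delay between attempts,
    // and gives up once the deadline passes — the pattern visible in the retry log lines.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
            time.Sleep(jittered)
            delay *= 2
        }
        return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.156", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }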
	I0313 23:48:30.905349   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:30.905815   22414 main.go:141] libmachine: (ha-504633-m03) Found IP for machine: 192.168.39.156
	I0313 23:48:30.905841   22414 main.go:141] libmachine: (ha-504633-m03) Reserving static IP address...
	I0313 23:48:30.905854   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has current primary IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:30.906225   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find host DHCP lease matching {name: "ha-504633-m03", mac: "52:54:00:94:1d:f9", ip: "192.168.39.156"} in network mk-ha-504633
	I0313 23:48:30.977479   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Getting to WaitForSSH function...
	I0313 23:48:30.977510   22414 main.go:141] libmachine: (ha-504633-m03) Reserved static IP address: 192.168.39.156
	I0313 23:48:30.977524   22414 main.go:141] libmachine: (ha-504633-m03) Waiting for SSH to be available...
	I0313 23:48:30.980054   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:30.980415   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633
	I0313 23:48:30.980444   22414 main.go:141] libmachine: (ha-504633-m03) DBG | unable to find defined IP address of network mk-ha-504633 interface with MAC address 52:54:00:94:1d:f9
	I0313 23:48:30.980645   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Using SSH client type: external
	I0313 23:48:30.980672   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa (-rw-------)
	I0313 23:48:30.980704   22414 main.go:141] libmachine: (ha-504633-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0313 23:48:30.980722   22414 main.go:141] libmachine: (ha-504633-m03) DBG | About to run SSH command:
	I0313 23:48:30.980738   22414 main.go:141] libmachine: (ha-504633-m03) DBG | exit 0
	I0313 23:48:30.984225   22414 main.go:141] libmachine: (ha-504633-m03) DBG | SSH cmd err, output: exit status 255: 
	I0313 23:48:30.984243   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0313 23:48:30.984250   22414 main.go:141] libmachine: (ha-504633-m03) DBG | command : exit 0
	I0313 23:48:30.984256   22414 main.go:141] libmachine: (ha-504633-m03) DBG | err     : exit status 255
	I0313 23:48:30.984263   22414 main.go:141] libmachine: (ha-504633-m03) DBG | output  : 
	I0313 23:48:33.986686   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Getting to WaitForSSH function...
	I0313 23:48:33.988995   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:33.989367   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:33.989403   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:33.989468   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Using SSH client type: external
	I0313 23:48:33.989491   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa (-rw-------)
	I0313 23:48:33.989530   22414 main.go:141] libmachine: (ha-504633-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0313 23:48:33.989544   22414 main.go:141] libmachine: (ha-504633-m03) DBG | About to run SSH command:
	I0313 23:48:33.989570   22414 main.go:141] libmachine: (ha-504633-m03) DBG | exit 0
	I0313 23:48:34.110725   22414 main.go:141] libmachine: (ha-504633-m03) DBG | SSH cmd err, output: <nil>: 
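WaitForSSH first drives an external ssh client with the options listed above and treats a successful `exit 0` as proof that sshd inside the VM is reachable; the first attempt fails with exit status 255 because no IP had been published yet. A minimal sketch of that probe, with the user, IP, and key path taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshExitZero runs `exit 0` over an external ssh client with host-key checking
    // disabled; a nil error means the VM's sshd accepted the connection.
    func sshExitZero(ip, keyPath string) error {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@"+ip,
            "exit 0")
        return cmd.Run()
    }

    func main() {
        fmt.Println(sshExitZero("192.168.39.156", "id_rsa"))
    }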
	I0313 23:48:34.110984   22414 main.go:141] libmachine: (ha-504633-m03) KVM machine creation complete!
	I0313 23:48:34.111290   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetConfigRaw
	I0313 23:48:34.111849   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:34.112070   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:34.112307   22414 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0313 23:48:34.112326   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetState
	I0313 23:48:34.113582   22414 main.go:141] libmachine: Detecting operating system of created instance...
	I0313 23:48:34.113600   22414 main.go:141] libmachine: Waiting for SSH to be available...
	I0313 23:48:34.113607   22414 main.go:141] libmachine: Getting to WaitForSSH function...
	I0313 23:48:34.113620   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:34.116063   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.116433   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.116458   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.116615   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:34.116779   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.116936   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.117079   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:34.117246   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:48:34.117476   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0313 23:48:34.117488   22414 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0313 23:48:34.218175   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
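Once the machine answers, libmachine switches to its "native" SSH client for the remaining provisioning commands. A minimal sketch of such a native client using golang.org/x/crypto/ssh (not minikube's actual implementation; user, address, and key path are taken from the log):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runNative authenticates with the machine key and runs a single command over an
    // in-process SSH session, the way the "native" client lines above do.
    func runNative(addr, keyPath, command string) error {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run(command)
    }

    func main() {
        fmt.Println(runNative("192.168.39.156:22", "id_rsa", "exit 0"))
    }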
	I0313 23:48:34.218198   22414 main.go:141] libmachine: Detecting the provisioner...
	I0313 23:48:34.218205   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:34.221129   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.221446   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.221511   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.221654   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:34.221904   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.222101   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.222250   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:34.222398   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:48:34.222579   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0313 23:48:34.222612   22414 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0313 23:48:34.323667   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0313 23:48:34.323723   22414 main.go:141] libmachine: found compatible host: buildroot
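Provisioner detection is just `cat /etc/os-release` keyed on the ID field, which here reports buildroot. A minimal sketch of that parsing:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // osReleaseID reads /etc/os-release and returns the ID value ("buildroot" above).
    func osReleaseID(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
            }
        }
        return "", sc.Err()
    }

    func main() {
        id, err := osReleaseID("/etc/os-release")
        fmt.Println(id, err)
    }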
	I0313 23:48:34.323730   22414 main.go:141] libmachine: Provisioning with buildroot...
	I0313 23:48:34.323737   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetMachineName
	I0313 23:48:34.324017   22414 buildroot.go:166] provisioning hostname "ha-504633-m03"
	I0313 23:48:34.324049   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetMachineName
	I0313 23:48:34.324258   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:34.327094   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.327541   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.327569   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.327681   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:34.327866   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.327985   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.328128   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:34.328253   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:48:34.328402   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0313 23:48:34.328414   22414 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-504633-m03 && echo "ha-504633-m03" | sudo tee /etc/hostname
	I0313 23:48:34.442416   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-504633-m03
	
	I0313 23:48:34.442441   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:34.445489   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.445976   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.446007   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.446179   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:34.446435   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.446629   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.446806   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:34.446969   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:48:34.447153   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0313 23:48:34.447177   22414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-504633-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-504633-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-504633-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0313 23:48:34.556883   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:48:34.556914   22414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0313 23:48:34.556933   22414 buildroot.go:174] setting up certificates
	I0313 23:48:34.556946   22414 provision.go:84] configureAuth start
	I0313 23:48:34.556963   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetMachineName
	I0313 23:48:34.557273   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:48:34.559957   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.560418   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.560447   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.560666   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:34.563247   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.563586   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.563609   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.563781   22414 provision.go:143] copyHostCerts
	I0313 23:48:34.563810   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:48:34.563847   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0313 23:48:34.563858   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:48:34.563925   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0313 23:48:34.563994   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:48:34.564011   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0313 23:48:34.564017   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:48:34.564045   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0313 23:48:34.564086   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:48:34.564102   22414 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0313 23:48:34.564108   22414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:48:34.564127   22414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0313 23:48:34.564173   22414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.ha-504633-m03 san=[127.0.0.1 192.168.39.156 ha-504633-m03 localhost minikube]
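configureAuth copies the host certs and then issues a server certificate signed by the existing CA, with the SANs listed in the line above. A hedged crypto/x509 sketch of that issuance — file paths, the PKCS#1 key format, and the 26280h expiry (from the config dump) are assumptions, not minikube's exact code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func mustRead(path string) []byte {
        b, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        return b
    }

    func main() {
        // Load the existing CA cert and key (paths and key format assumed).
        caBlock, _ := pem.Decode(mustRead("certs/ca.pem"))
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            panic(err)
        }
        keyBlock, _ := pem.Decode(mustRead("certs/ca-key.pem"))
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        if err != nil {
            panic(err)
        }

        // Server cert template carrying the SANs shown in the log line above.
        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-504633-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.156")},
            DNSNames:     []string{"ha-504633-m03", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        _ = os.WriteFile("machines/server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
        _ = os.WriteFile("machines/server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
    }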
	I0313 23:48:34.695002   22414 provision.go:177] copyRemoteCerts
	I0313 23:48:34.695054   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0313 23:48:34.695074   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:34.697643   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.698030   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.698057   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.698237   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:34.698424   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.698626   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:34.698817   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:48:34.783808   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0313 23:48:34.783882   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0313 23:48:34.814591   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0313 23:48:34.814657   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0313 23:48:34.844611   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0313 23:48:34.844686   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0313 23:48:34.871720   22414 provision.go:87] duration metric: took 314.757689ms to configureAuth
	I0313 23:48:34.871745   22414 buildroot.go:189] setting minikube options for container-runtime
	I0313 23:48:34.872007   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:48:34.872103   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:34.874669   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.875068   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:34.875097   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:34.875342   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:34.875517   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.875648   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:34.875751   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:34.875899   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:48:34.876092   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0313 23:48:34.876115   22414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0313 23:48:35.140993   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0313 23:48:35.141022   22414 main.go:141] libmachine: Checking connection to Docker...
	I0313 23:48:35.141039   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetURL
	I0313 23:48:35.142371   22414 main.go:141] libmachine: (ha-504633-m03) DBG | Using libvirt version 6000000
	I0313 23:48:35.144667   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.145063   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:35.145091   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.145250   22414 main.go:141] libmachine: Docker is up and running!
	I0313 23:48:35.145262   22414 main.go:141] libmachine: Reticulating splines...
	I0313 23:48:35.145268   22414 client.go:171] duration metric: took 28.934599353s to LocalClient.Create
	I0313 23:48:35.145294   22414 start.go:167] duration metric: took 28.934664266s to libmachine.API.Create "ha-504633"
	I0313 23:48:35.145307   22414 start.go:293] postStartSetup for "ha-504633-m03" (driver="kvm2")
	I0313 23:48:35.145321   22414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0313 23:48:35.145337   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:35.145561   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0313 23:48:35.145620   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:35.147933   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.148269   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:35.148292   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.148437   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:35.148631   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:35.148815   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:35.148976   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:48:35.230518   22414 ssh_runner.go:195] Run: cat /etc/os-release
	I0313 23:48:35.235076   22414 info.go:137] Remote host: Buildroot 2023.02.9
	I0313 23:48:35.235107   22414 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0313 23:48:35.235173   22414 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0313 23:48:35.235273   22414 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0313 23:48:35.235286   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /etc/ssl/certs/122682.pem
	I0313 23:48:35.235390   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0313 23:48:35.246856   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:48:35.272754   22414 start.go:296] duration metric: took 127.430693ms for postStartSetup
	I0313 23:48:35.272817   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetConfigRaw
	I0313 23:48:35.273395   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:48:35.276063   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.276434   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:35.276466   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.276817   22414 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:48:35.277013   22414 start.go:128] duration metric: took 29.085030265s to createHost
	I0313 23:48:35.277035   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:35.279688   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.280086   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:35.280115   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.280307   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:35.280544   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:35.280732   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:35.280910   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:35.281091   22414 main.go:141] libmachine: Using SSH client type: native
	I0313 23:48:35.281314   22414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0313 23:48:35.281329   22414 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0313 23:48:35.383994   22414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710373715.361768120
	
	I0313 23:48:35.384023   22414 fix.go:216] guest clock: 1710373715.361768120
	I0313 23:48:35.384035   22414 fix.go:229] Guest: 2024-03-13 23:48:35.36176812 +0000 UTC Remote: 2024-03-13 23:48:35.277024662 +0000 UTC m=+243.199508230 (delta=84.743458ms)
	I0313 23:48:35.384056   22414 fix.go:200] guest clock delta is within tolerance: 84.743458ms
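The guest-clock fix compares the `date +%s.%N` value returned by the VM with the host-side view of the remote time and accepts the machine when the skew is within tolerance (84.743458ms here). A minimal sketch of that comparison using the values from the log; the one-second tolerance is an assumption:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the absolute guest/host clock delta is acceptable.
    func withinTolerance(guest, remote time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        guest := time.Unix(1710373715, 361768120)            // value the VM printed in the log
        remote := guest.Add(-84743458 * time.Nanosecond)     // reproduces the logged 84.743458ms delta
        fmt.Println("within tolerance:", withinTolerance(guest, remote, time.Second))
    }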
	I0313 23:48:35.384064   22414 start.go:83] releasing machines lock for "ha-504633-m03", held for 29.192212918s
	I0313 23:48:35.384118   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:35.384400   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:48:35.386936   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.387364   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:35.387390   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.389774   22414 out.go:177] * Found network options:
	I0313 23:48:35.391527   22414 out.go:177]   - NO_PROXY=192.168.39.31,192.168.39.47
	W0313 23:48:35.393085   22414 proxy.go:119] fail to check proxy env: Error ip not in block
	W0313 23:48:35.393107   22414 proxy.go:119] fail to check proxy env: Error ip not in block
	I0313 23:48:35.393123   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:35.393768   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:35.393962   22414 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:48:35.394068   22414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0313 23:48:35.394117   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	W0313 23:48:35.394195   22414 proxy.go:119] fail to check proxy env: Error ip not in block
	W0313 23:48:35.394222   22414 proxy.go:119] fail to check proxy env: Error ip not in block
	I0313 23:48:35.394290   22414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0313 23:48:35.394315   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:48:35.397114   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.397367   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.397523   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:35.397553   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.397705   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:35.397835   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:35.397862   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:35.398013   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:48:35.398051   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:35.398151   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:35.398197   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:48:35.398312   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:48:35.398346   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:48:35.398477   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:48:35.637929   22414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0313 23:48:35.644363   22414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0313 23:48:35.644422   22414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0313 23:48:35.661140   22414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0313 23:48:35.661163   22414 start.go:494] detecting cgroup driver to use...
	I0313 23:48:35.661232   22414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0313 23:48:35.679366   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0313 23:48:35.694561   22414 docker.go:217] disabling cri-docker service (if available) ...
	I0313 23:48:35.694624   22414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0313 23:48:35.709117   22414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0313 23:48:35.723163   22414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0313 23:48:35.842898   22414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0313 23:48:35.991544   22414 docker.go:233] disabling docker service ...
	I0313 23:48:35.991629   22414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0313 23:48:36.009122   22414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0313 23:48:36.024083   22414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0313 23:48:36.165785   22414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0313 23:48:36.316911   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0313 23:48:36.332008   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0313 23:48:36.353156   22414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0313 23:48:36.353221   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:48:36.364075   22414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0313 23:48:36.364132   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:48:36.374950   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:48:36.385632   22414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:48:36.396708   22414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0313 23:48:36.408619   22414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0313 23:48:36.420158   22414 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0313 23:48:36.420219   22414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0313 23:48:36.436036   22414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
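The three commands above capture the netfilter fallback order: probe the net.bridge.bridge-nf-call-iptables sysctl, load br_netfilter only when the probe fails (as it does here because the module is not yet loaded), then enable IPv4 forwarding. A hedged Go sketch of that sequence, not minikube's actual helper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureBridgeNetfilter mirrors the order seen in the log: sysctl probe,
    // modprobe fallback, then ip_forward. Illustrative only.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // The sysctl key is absent until br_netfilter is loaded.
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %w", err)
            }
        }
        return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Println(err)
        }
    }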
	I0313 23:48:36.447006   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:48:36.580531   22414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0313 23:48:36.725522   22414 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0313 23:48:36.725596   22414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0313 23:48:36.731189   22414 start.go:562] Will wait 60s for crictl version
	I0313 23:48:36.731246   22414 ssh_runner.go:195] Run: which crictl
	I0313 23:48:36.735480   22414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0313 23:48:36.778545   22414 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0313 23:48:36.778639   22414 ssh_runner.go:195] Run: crio --version
	I0313 23:48:36.811946   22414 ssh_runner.go:195] Run: crio --version
	I0313 23:48:36.848008   22414 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0313 23:48:36.849251   22414 out.go:177]   - env NO_PROXY=192.168.39.31
	I0313 23:48:36.850377   22414 out.go:177]   - env NO_PROXY=192.168.39.31,192.168.39.47
	I0313 23:48:36.851374   22414 main.go:141] libmachine: (ha-504633-m03) Calling .GetIP
	I0313 23:48:36.853713   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:36.854031   22414 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:48:36.854053   22414 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:48:36.854252   22414 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0313 23:48:36.858843   22414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:48:36.872293   22414 mustload.go:65] Loading cluster: ha-504633
	I0313 23:48:36.872560   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:48:36.872819   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:48:36.872857   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:48:36.888475   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0313 23:48:36.888949   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:48:36.889419   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:48:36.889439   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:48:36.889739   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:48:36.889931   22414 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:48:36.891566   22414 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:48:36.891854   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:48:36.891896   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:48:36.906024   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40251
	I0313 23:48:36.906476   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:48:36.906898   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:48:36.906919   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:48:36.907234   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:48:36.907397   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:48:36.907559   22414 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633 for IP: 192.168.39.156
	I0313 23:48:36.907571   22414 certs.go:194] generating shared ca certs ...
	I0313 23:48:36.907586   22414 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:48:36.907699   22414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0313 23:48:36.907733   22414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0313 23:48:36.907742   22414 certs.go:256] generating profile certs ...
	I0313 23:48:36.907805   22414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key
	I0313 23:48:36.907828   22414 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.0105be6a
	I0313 23:48:36.907853   22414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.0105be6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.31 192.168.39.47 192.168.39.156 192.168.39.254]
	I0313 23:48:37.191402   22414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.0105be6a ...
	I0313 23:48:37.191437   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.0105be6a: {Name:mk01aec37fad9eb342e8f4115b2ff616d738d56f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:48:37.191616   22414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.0105be6a ...
	I0313 23:48:37.191635   22414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.0105be6a: {Name:mkfba142dfa49e6dea2431f00b6486fa1ca09722 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:48:37.191731   22414 certs.go:381] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.0105be6a -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt
	I0313 23:48:37.191892   22414 certs.go:385] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.0105be6a -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key
	I0313 23:48:37.192087   22414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key
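The apiserver profile certificate generated above carries IP SANs for the in-cluster service IP, loopback, all three control-plane nodes, and the kube-vip VIP (192.168.39.254). A minimal crypto/x509 sketch of producing a certificate with that SAN set; it is self-signed for brevity, whereas the real cert is signed by minikubeCA, and the 26280h lifetime is taken from the CertExpiration value later in this log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // IP SANs from the log: service IP, loopback, node IPs, VIP.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.31"), net.ParseIP("192.168.39.47"),
                net.ParseIP("192.168.39.156"), net.ParseIP("192.168.39.254"),
            },
        }
        // Self-signed here; minikube signs the profile cert with its CA key.
        if _, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key); err != nil {
            panic(err)
        }
    }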
	I0313 23:48:37.192109   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0313 23:48:37.192127   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0313 23:48:37.192141   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0313 23:48:37.192158   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0313 23:48:37.192172   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0313 23:48:37.192185   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0313 23:48:37.192197   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0313 23:48:37.192206   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0313 23:48:37.192259   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0313 23:48:37.192288   22414 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0313 23:48:37.192299   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0313 23:48:37.192320   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0313 23:48:37.192343   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0313 23:48:37.192365   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0313 23:48:37.192400   22414 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:48:37.192430   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /usr/share/ca-certificates/122682.pem
	I0313 23:48:37.192444   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:48:37.192456   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem -> /usr/share/ca-certificates/12268.pem
	I0313 23:48:37.192485   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:48:37.195532   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:48:37.195944   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:48:37.195973   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:48:37.196102   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:48:37.196252   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:48:37.196368   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:48:37.196468   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:48:37.275190   22414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0313 23:48:37.281593   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0313 23:48:37.304527   22414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0313 23:48:37.311458   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0313 23:48:37.322212   22414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0313 23:48:37.327637   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0313 23:48:37.338878   22414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0313 23:48:37.344214   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0313 23:48:37.356587   22414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0313 23:48:37.361373   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0313 23:48:37.375940   22414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0313 23:48:37.382277   22414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0313 23:48:37.395261   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0313 23:48:37.425031   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0313 23:48:37.452505   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0313 23:48:37.480332   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0313 23:48:37.522097   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0313 23:48:37.550670   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0313 23:48:37.576952   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0313 23:48:37.605324   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0313 23:48:37.633147   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0313 23:48:37.658539   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0313 23:48:37.683676   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0313 23:48:37.708558   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0313 23:48:37.725717   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0313 23:48:37.742461   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0313 23:48:37.759789   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0313 23:48:37.776921   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0313 23:48:37.794144   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0313 23:48:37.812232   22414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0313 23:48:37.829750   22414 ssh_runner.go:195] Run: openssl version
	I0313 23:48:37.835395   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0313 23:48:37.846020   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:48:37.850417   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:48:37.850461   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:48:37.856309   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0313 23:48:37.866963   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0313 23:48:37.877363   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0313 23:48:37.881844   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0313 23:48:37.881885   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0313 23:48:37.887483   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0313 23:48:37.897775   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0313 23:48:37.908109   22414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0313 23:48:37.912502   22414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0313 23:48:37.912537   22414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0313 23:48:37.918049   22414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
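Each CA above is also linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run), which is what lets OpenSSL-based clients find it. A small Go sketch of computing that link name with the same openssl invocation the runner uses:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hashLinkName returns "<subject-hash>.0", the filename OpenSSL looks up in
    // /etc/ssl/certs for a given CA certificate. Illustrative sketch only.
    func hashLinkName(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)) + ".0", nil
    }

    func main() {
        name, err := hashLinkName("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            fmt.Println(err)
            return
        }
        // e.g. "b5213941.0"; the runner then does
        // ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/<name>.
        fmt.Println(name)
    }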
	I0313 23:48:37.929065   22414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0313 23:48:37.933117   22414 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0313 23:48:37.933160   22414 kubeadm.go:928] updating node {m03 192.168.39.156 8443 v1.28.4 crio true true} ...
	I0313 23:48:37.933230   22414 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-504633-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0313 23:48:37.933253   22414 kube-vip.go:105] generating kube-vip config ...
	I0313 23:48:37.933278   22414 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
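The kube-vip static pod above is rendered with this cluster's values filled in: the VIP 192.168.39.254 (APIServerHAVIP) and API port 8443. A hedged text/template sketch of rendering just those two fields; the template fragment and field names are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // Fragment of a kube-vip manifest with only the per-cluster fields templated.
    const kubeVIPFragment = "    - name: address\n      value: {{ .VIP }}\n    - name: port\n      value: \"{{ .Port }}\"\n"

    func main() {
        tmpl := template.Must(template.New("kube-vip").Parse(kubeVIPFragment))
        // Values from this run's config: APIServerHAVIP and APIServerPort.
        _ = tmpl.Execute(os.Stdout, struct {
            VIP  string
            Port int
        }{VIP: "192.168.39.254", Port: 8443})
    }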
	I0313 23:48:37.933311   22414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0313 23:48:37.942979   22414 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0313 23:48:37.943028   22414 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0313 23:48:37.952766   22414 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0313 23:48:37.952791   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0313 23:48:37.952809   22414 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0313 23:48:37.952852   22414 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0313 23:48:37.952860   22414 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0313 23:48:37.952871   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0313 23:48:37.952856   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:48:37.952930   22414 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0313 23:48:37.965815   22414 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0313 23:48:37.965843   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0313 23:48:37.965858   22414 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0313 23:48:37.965883   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0313 23:48:38.001795   22414 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0313 23:48:38.001893   22414 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0313 23:48:38.119684   22414 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0313 23:48:38.119724   22414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
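In this run the kubectl, kubeadm and kubelet binaries come out of the local cache and are copied over SSH, but the URLs logged above show the checksum-verified source they would otherwise be downloaded from. A minimal sketch of that download-and-verify step; the fetch helper is made up for the example:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    // fetch downloads url and returns the body; helper for the sketch only.
    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet"
        bin, err := fetch(base)
        if err != nil {
            panic(err)
        }
        sum, err := fetch(base + ".sha256")
        if err != nil {
            panic(err)
        }
        want := strings.Fields(string(sum))[0]
        got := sha256.Sum256(bin)
        if hex.EncodeToString(got[:]) != want {
            panic("kubelet checksum mismatch")
        }
        fmt.Println("kubelet verified; ready to place under /var/lib/minikube/binaries/v1.28.4/")
    }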
	I0313 23:48:38.987934   22414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0313 23:48:38.998590   22414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0313 23:48:39.016730   22414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0313 23:48:39.034203   22414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0313 23:48:39.051852   22414 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0313 23:48:39.056306   22414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:48:39.070636   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:48:39.197836   22414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:48:39.217277   22414 host.go:66] Checking if "ha-504633" exists ...
	I0313 23:48:39.217775   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:48:39.217830   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:48:39.232885   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46425
	I0313 23:48:39.233280   22414 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:48:39.233770   22414 main.go:141] libmachine: Using API Version  1
	I0313 23:48:39.233790   22414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:48:39.234162   22414 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:48:39.234411   22414 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:48:39.234582   22414 start.go:316] joinCluster: &{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cluster
Name:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:48:39.234739   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0313 23:48:39.234753   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:48:39.237855   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:48:39.238365   22414 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:48:39.238390   22414 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:48:39.238567   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:48:39.238747   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:48:39.238911   22414 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:48:39.239058   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:48:39.401171   22414 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:48:39.401218   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8vtd06.300gcezfxmd801mh --discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-504633-m03 --control-plane --apiserver-advertise-address=192.168.39.156 --apiserver-bind-port=8443"
	I0313 23:49:05.595949   22414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8vtd06.300gcezfxmd801mh --discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-504633-m03 --control-plane --apiserver-advertise-address=192.168.39.156 --apiserver-bind-port=8443": (26.194704025s)
	I0313 23:49:05.596019   22414 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0313 23:49:06.122415   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-504633-m03 minikube.k8s.io/updated_at=2024_03_13T23_49_06_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe minikube.k8s.io/name=ha-504633 minikube.k8s.io/primary=false
	I0313 23:49:06.291089   22414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-504633-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0313 23:49:06.415982   22414 start.go:318] duration metric: took 27.181396251s to joinCluster
	I0313 23:49:06.416085   22414 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0313 23:49:06.417805   22414 out.go:177] * Verifying Kubernetes components...
	I0313 23:49:06.416449   22414 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:49:06.419289   22414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:49:06.635370   22414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:49:06.655369   22414 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:49:06.655707   22414 kapi.go:59] client config for ha-504633: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.crt", KeyFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key", CAFile:"/home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0313 23:49:06.655797   22414 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.31:8443
	I0313 23:49:06.656074   22414 node_ready.go:35] waiting up to 6m0s for node "ha-504633-m03" to be "Ready" ...
	I0313 23:49:06.656156   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:06.656167   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:06.656177   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:06.656183   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:06.660365   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
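The GET requests that follow poll /api/v1/nodes/ha-504633-m03 roughly every 500ms until its Ready condition turns True (node_ready.go reports "Ready":"False" until then). A hedged client-go sketch of the same wait, using the kubeconfig path logged above; the poll interval and retry-on-error behaviour are assumptions:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18375-4912/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms for up to 6 minutes, mirroring "waiting up to 6m0s" above.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-504633-m03", metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep retrying on transient errors
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("node Ready wait finished, err =", err)
    }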
	I0313 23:49:07.157146   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:07.157177   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:07.157185   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:07.157194   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:07.161587   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:07.656405   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:07.656426   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:07.656434   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:07.656438   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:07.660558   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:08.156312   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:08.156332   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:08.156340   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:08.156343   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:08.160269   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:08.656612   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:08.656633   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:08.656644   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:08.656647   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:08.660190   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:08.660981   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:09.157309   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:09.157337   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:09.157347   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:09.157354   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:09.161762   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:09.656718   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:09.656744   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:09.656755   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:09.656760   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:09.662709   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:49:10.157200   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:10.157224   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:10.157232   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:10.157236   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:10.160892   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:10.656443   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:10.656465   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:10.656476   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:10.656492   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:10.660269   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:11.156342   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:11.156367   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:11.156379   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:11.156384   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:11.160336   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:11.161137   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:11.657012   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:11.657031   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:11.657039   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:11.657043   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:11.660636   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:12.156637   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:12.156659   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:12.156666   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:12.156670   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:12.160269   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:12.657191   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:12.657212   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:12.657222   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:12.657227   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:12.660950   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:13.156718   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:13.156752   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:13.156764   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:13.156769   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:13.161388   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:13.161962   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:13.657047   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:13.657068   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:13.657076   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:13.657080   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:13.660958   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:14.156305   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:14.156327   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:14.156337   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:14.156343   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:14.159935   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:14.656968   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:14.656989   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:14.656997   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:14.657002   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:14.660792   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:15.156728   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:15.156749   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:15.156756   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:15.156761   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:15.160574   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:15.657193   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:15.657235   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:15.657263   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:15.657269   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:15.661236   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:15.661987   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:16.156258   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:16.156281   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:16.156292   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:16.156296   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:16.160038   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:16.656366   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:16.656389   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:16.656400   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:16.656406   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:16.661256   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:17.156640   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:17.156672   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:17.156681   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:17.156685   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:17.160708   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:17.656538   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:17.656561   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:17.656573   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:17.656578   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:17.662541   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:49:17.663303   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:18.156579   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:18.156601   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:18.156609   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:18.156614   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:18.160511   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:18.656359   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:18.656382   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:18.656390   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:18.656394   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:18.660023   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:19.156747   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:19.156771   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:19.156780   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:19.156783   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:19.160504   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:19.656225   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:19.656251   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:19.656264   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:19.656270   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:19.660221   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:20.156798   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:20.156819   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:20.156831   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:20.156842   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:20.160836   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:20.161707   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:20.657073   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:20.657093   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:20.657102   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:20.657105   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:20.661497   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:21.156949   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:21.156982   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:21.156993   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:21.156999   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:21.160870   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:21.656450   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:21.656471   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:21.656479   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:21.656483   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:21.660293   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:22.157034   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:22.157062   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:22.157073   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:22.157079   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:22.161438   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:22.162094   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:22.656936   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:22.656956   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:22.656965   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:22.656969   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:22.668835   22414 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0313 23:49:23.156892   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:23.156914   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:23.156921   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:23.156924   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:23.161669   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:23.656492   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:23.656512   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:23.656520   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:23.656524   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:23.660224   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:24.156249   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:24.156269   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:24.156277   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:24.156282   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:24.160177   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:24.656890   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:24.656911   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:24.656922   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:24.656927   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:24.660744   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:24.661688   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:25.157161   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:25.157187   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:25.157198   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:25.157202   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:25.160839   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:25.657186   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:25.657206   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:25.657214   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:25.657217   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:25.660681   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:26.156626   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:26.156648   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:26.156657   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:26.156662   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:26.160565   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:26.656997   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:26.657023   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:26.657034   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:26.657043   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:26.660542   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:27.157103   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:27.157132   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:27.157143   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:27.157147   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:27.161748   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:27.162947   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:27.656442   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:27.656461   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:27.656469   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:27.656474   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:27.659981   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:28.156400   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:28.156422   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:28.156429   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:28.156433   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:28.159900   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:28.657085   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:28.657118   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:28.657128   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:28.657134   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:28.660279   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:29.156397   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:29.156432   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:29.156442   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:29.156447   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:29.160531   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:29.656277   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:29.656326   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:29.656336   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:29.656344   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:29.659912   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:29.660580   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:30.156616   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:30.156640   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:30.156650   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:30.156656   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:30.161718   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:49:30.656380   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:30.656416   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:30.656426   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:30.656434   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:30.659825   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:31.156362   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:31.156390   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:31.156399   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:31.156405   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:31.160760   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:31.657210   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:31.657235   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:31.657248   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:31.657255   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:31.661052   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:31.661725   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:32.156739   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:32.156765   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:32.156777   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:32.156783   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:32.160349   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:32.657026   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:32.657053   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:32.657066   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:32.657071   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:32.660625   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:33.156667   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:33.156689   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:33.156700   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:33.156705   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:33.160593   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:33.656835   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:33.656860   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:33.656874   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:33.656881   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:33.660429   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:34.156274   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:34.156295   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:34.156305   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:34.156310   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:34.159577   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:34.160118   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:34.656292   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:34.656311   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:34.656319   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:34.656323   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:34.660280   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:35.156408   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:35.156430   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:35.156440   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:35.156446   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:35.160043   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:35.656908   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:35.656935   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:35.656948   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:35.656952   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:35.660737   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:36.156633   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:36.156654   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:36.156662   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:36.156668   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:36.160175   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:36.160821   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:36.656664   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:36.656693   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:36.656705   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:36.656711   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:36.660393   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:37.156587   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:37.156614   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:37.156622   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:37.156626   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:37.160590   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:37.656458   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:37.656488   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:37.656500   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:37.656506   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:37.660153   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:38.157039   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:38.157062   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:38.157074   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:38.157079   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:38.161233   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:38.161913   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:38.656563   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:38.656583   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:38.656591   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:38.656595   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:38.660313   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:39.156359   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:39.156382   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:39.156390   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:39.156394   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:39.160204   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:39.657222   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:39.657255   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:39.657263   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:39.657267   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:39.660891   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:40.157105   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:40.157124   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:40.157132   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:40.157137   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:40.160693   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:40.656981   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:40.657004   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:40.657013   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:40.657018   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:40.660588   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:40.661073   22414 node_ready.go:53] node "ha-504633-m03" has status "Ready":"False"
	I0313 23:49:41.156480   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:41.156509   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:41.156520   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:41.156525   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:41.160366   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:41.656987   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:41.657009   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:41.657017   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:41.657020   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:41.660801   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:42.156847   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:42.156874   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.156886   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.156890   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.160900   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:42.657184   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:42.657205   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.657213   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.657218   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.660663   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:42.661220   22414 node_ready.go:49] node "ha-504633-m03" has status "Ready":"True"
	I0313 23:49:42.661238   22414 node_ready.go:38] duration metric: took 36.005140846s for node "ha-504633-m03" to be "Ready" ...
	I0313 23:49:42.661248   22414 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0313 23:49:42.661315   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:49:42.661327   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.661335   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.661341   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.673305   22414 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0313 23:49:42.679704   22414 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dbkfv" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.679780   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dbkfv
	I0313 23:49:42.679787   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.679794   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.679805   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.683229   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:42.683953   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:42.683972   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.683983   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.683990   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.687009   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:42.687634   22414 pod_ready.go:92] pod "coredns-5dd5756b68-dbkfv" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:42.687656   22414 pod_ready.go:81] duration metric: took 7.928033ms for pod "coredns-5dd5756b68-dbkfv" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.687667   22414 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hh2kw" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.687722   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-hh2kw
	I0313 23:49:42.687735   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.687742   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.687747   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.690647   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:49:42.691458   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:42.691475   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.691481   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.691484   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.694308   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:49:42.695093   22414 pod_ready.go:92] pod "coredns-5dd5756b68-hh2kw" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:42.695110   22414 pod_ready.go:81] duration metric: took 7.429038ms for pod "coredns-5dd5756b68-hh2kw" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.695118   22414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.695158   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633
	I0313 23:49:42.695166   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.695173   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.695175   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.697936   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:49:42.698439   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:42.698451   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.698458   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.698461   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.701290   22414 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0313 23:49:42.701693   22414 pod_ready.go:92] pod "etcd-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:42.701709   22414 pod_ready.go:81] duration metric: took 6.585814ms for pod "etcd-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.701717   22414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.701763   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m02
	I0313 23:49:42.701771   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.701777   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.701781   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.705482   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:42.705966   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:42.705979   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.705986   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.705990   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.710405   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:42.711113   22414 pod_ready.go:92] pod "etcd-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:42.711127   22414 pod_ready.go:81] duration metric: took 9.40481ms for pod "etcd-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.711135   22414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:42.857547   22414 request.go:629] Waited for 146.335115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m03
	I0313 23:49:42.857614   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-504633-m03
	I0313 23:49:42.857623   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:42.857636   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:42.857644   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:42.861452   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:43.057319   22414 request.go:629] Waited for 195.291793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:43.057389   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:43.057394   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:43.057401   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:43.057404   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:43.062957   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:49:43.063923   22414 pod_ready.go:92] pod "etcd-ha-504633-m03" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:43.063948   22414 pod_ready.go:81] duration metric: took 352.806196ms for pod "etcd-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:43.063973   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:43.258198   22414 request.go:629] Waited for 194.156539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633
	I0313 23:49:43.258250   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633
	I0313 23:49:43.258255   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:43.258262   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:43.258267   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:43.261920   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:43.457909   22414 request.go:629] Waited for 195.376655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:43.457974   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:43.457979   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:43.457986   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:43.457990   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:43.462063   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:43.462868   22414 pod_ready.go:92] pod "kube-apiserver-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:43.462893   22414 pod_ready.go:81] duration metric: took 398.910882ms for pod "kube-apiserver-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:43.462905   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:43.658023   22414 request.go:629] Waited for 195.045771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633-m02
	I0313 23:49:43.658096   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633-m02
	I0313 23:49:43.658107   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:43.658117   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:43.658123   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:43.661935   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:43.857960   22414 request.go:629] Waited for 195.371095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:43.858055   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:43.858068   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:43.858081   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:43.858088   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:43.861950   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:43.862576   22414 pod_ready.go:92] pod "kube-apiserver-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:43.862599   22414 pod_ready.go:81] duration metric: took 399.683404ms for pod "kube-apiserver-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:43.862611   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:44.057745   22414 request.go:629] Waited for 195.057927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633-m03
	I0313 23:49:44.057822   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-504633-m03
	I0313 23:49:44.057832   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:44.057841   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:44.057847   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:44.061771   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:44.257786   22414 request.go:629] Waited for 195.400984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:44.257843   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:44.257847   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:44.257855   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:44.257860   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:44.261973   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:44.262451   22414 pod_ready.go:92] pod "kube-apiserver-ha-504633-m03" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:44.262470   22414 pod_ready.go:81] duration metric: took 399.850873ms for pod "kube-apiserver-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:44.262484   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:44.458161   22414 request.go:629] Waited for 195.594135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633
	I0313 23:49:44.458233   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633
	I0313 23:49:44.458244   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:44.458256   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:44.458262   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:44.462588   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:44.657528   22414 request.go:629] Waited for 194.387984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:44.657586   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:44.657592   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:44.657598   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:44.657603   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:44.661301   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:44.662096   22414 pod_ready.go:92] pod "kube-controller-manager-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:44.662117   22414 pod_ready.go:81] duration metric: took 399.621338ms for pod "kube-controller-manager-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:44.662130   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:44.858095   22414 request.go:629] Waited for 195.896254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633-m02
	I0313 23:49:44.858174   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633-m02
	I0313 23:49:44.858201   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:44.858213   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:44.858218   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:44.864178   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:49:45.057218   22414 request.go:629] Waited for 192.330184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:45.057295   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:45.057302   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:45.057312   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:45.057325   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:45.060714   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:45.061326   22414 pod_ready.go:92] pod "kube-controller-manager-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:45.061345   22414 pod_ready.go:81] duration metric: took 399.208021ms for pod "kube-controller-manager-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:45.061355   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:45.257479   22414 request.go:629] Waited for 196.049636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633-m03
	I0313 23:49:45.257530   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-504633-m03
	I0313 23:49:45.257535   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:45.257543   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:45.257546   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:45.261706   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:45.457710   22414 request.go:629] Waited for 195.37714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:45.457791   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:45.457797   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:45.457804   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:45.457809   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:45.461552   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:45.462172   22414 pod_ready.go:92] pod "kube-controller-manager-ha-504633-m03" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:45.462192   22414 pod_ready.go:81] duration metric: took 400.831073ms for pod "kube-controller-manager-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:45.462201   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4s9t5" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:45.657295   22414 request.go:629] Waited for 195.042177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4s9t5
	I0313 23:49:45.657352   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4s9t5
	I0313 23:49:45.657368   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:45.657375   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:45.657380   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:45.661842   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:45.857850   22414 request.go:629] Waited for 195.383513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:45.857913   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:45.857931   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:45.857943   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:45.857953   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:45.861846   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:45.862411   22414 pod_ready.go:92] pod "kube-proxy-4s9t5" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:45.862437   22414 pod_ready.go:81] duration metric: took 400.229023ms for pod "kube-proxy-4s9t5" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:45.862450   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fgcxp" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:46.058225   22414 request.go:629] Waited for 195.708482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgcxp
	I0313 23:49:46.058279   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgcxp
	I0313 23:49:46.058284   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:46.058291   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:46.058295   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:46.062068   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:46.258172   22414 request.go:629] Waited for 195.400958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:46.258238   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:46.258249   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:46.258259   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:46.258270   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:46.261914   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:46.262333   22414 pod_ready.go:92] pod "kube-proxy-fgcxp" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:46.262351   22414 pod_ready.go:81] duration metric: took 399.893993ms for pod "kube-proxy-fgcxp" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:46.262360   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j56zl" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:46.457527   22414 request.go:629] Waited for 195.09857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j56zl
	I0313 23:49:46.457596   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j56zl
	I0313 23:49:46.457602   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:46.457609   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:46.457615   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:46.461871   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:46.657920   22414 request.go:629] Waited for 195.260373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:46.658013   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:46.658021   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:46.658032   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:46.658039   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:46.662009   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:46.662639   22414 pod_ready.go:92] pod "kube-proxy-j56zl" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:46.662664   22414 pod_ready.go:81] duration metric: took 400.294109ms for pod "kube-proxy-j56zl" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:46.662676   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:46.857649   22414 request.go:629] Waited for 194.903331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633
	I0313 23:49:46.857721   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633
	I0313 23:49:46.857727   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:46.857737   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:46.857741   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:46.863018   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:49:47.058114   22414 request.go:629] Waited for 194.351431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:47.058173   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633
	I0313 23:49:47.058178   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:47.058186   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:47.058190   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:47.061891   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:47.062362   22414 pod_ready.go:92] pod "kube-scheduler-ha-504633" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:47.062379   22414 pod_ready.go:81] duration metric: took 399.695207ms for pod "kube-scheduler-ha-504633" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:47.062389   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:47.257581   22414 request.go:629] Waited for 195.108154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633-m02
	I0313 23:49:47.257632   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633-m02
	I0313 23:49:47.257636   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:47.257644   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:47.257649   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:47.261972   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:47.457697   22414 request.go:629] Waited for 195.169907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:47.457764   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m02
	I0313 23:49:47.457772   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:47.457783   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:47.457788   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:47.462134   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:47.463074   22414 pod_ready.go:92] pod "kube-scheduler-ha-504633-m02" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:47.463094   22414 pod_ready.go:81] duration metric: took 400.698904ms for pod "kube-scheduler-ha-504633-m02" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:47.463106   22414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:47.658140   22414 request.go:629] Waited for 194.971007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633-m03
	I0313 23:49:47.658191   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-504633-m03
	I0313 23:49:47.658197   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:47.658204   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:47.658209   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:47.662107   22414 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0313 23:49:47.857937   22414 request.go:629] Waited for 195.372026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:47.857993   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes/ha-504633-m03
	I0313 23:49:47.858001   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:47.858010   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:47.858022   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:47.864046   22414 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0313 23:49:47.864566   22414 pod_ready.go:92] pod "kube-scheduler-ha-504633-m03" in "kube-system" namespace has status "Ready":"True"
	I0313 23:49:47.864586   22414 pod_ready.go:81] duration metric: took 401.473601ms for pod "kube-scheduler-ha-504633-m03" in "kube-system" namespace to be "Ready" ...
	I0313 23:49:47.864607   22414 pod_ready.go:38] duration metric: took 5.203345886s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0313 23:49:47.864632   22414 api_server.go:52] waiting for apiserver process to appear ...
	I0313 23:49:47.864693   22414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:49:47.884651   22414 api_server.go:72] duration metric: took 41.46852741s to wait for apiserver process to appear ...
	I0313 23:49:47.884684   22414 api_server.go:88] waiting for apiserver healthz status ...
	I0313 23:49:47.884705   22414 api_server.go:253] Checking apiserver healthz at https://192.168.39.31:8443/healthz ...
	I0313 23:49:47.891488   22414 api_server.go:279] https://192.168.39.31:8443/healthz returned 200:
	ok
	I0313 23:49:47.891571   22414 round_trippers.go:463] GET https://192.168.39.31:8443/version
	I0313 23:49:47.891583   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:47.891595   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:47.891608   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:47.892898   22414 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0313 23:49:47.893007   22414 api_server.go:141] control plane version: v1.28.4
	I0313 23:49:47.893032   22414 api_server.go:131] duration metric: took 8.340573ms to wait for apiserver health ...
	I0313 23:49:47.893040   22414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0313 23:49:48.057341   22414 request.go:629] Waited for 164.218413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:49:48.057408   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:49:48.057415   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:48.057431   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:48.057440   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:48.065455   22414 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0313 23:49:48.073154   22414 system_pods.go:59] 24 kube-system pods found
	I0313 23:49:48.073180   22414 system_pods.go:61] "coredns-5dd5756b68-dbkfv" [bb55bb86-7637-4571-af89-55b34361d46f] Running
	I0313 23:49:48.073184   22414 system_pods.go:61] "coredns-5dd5756b68-hh2kw" [ac81d022-8c47-4f99-8a34-bb4f73ead561] Running
	I0313 23:49:48.073193   22414 system_pods.go:61] "etcd-ha-504633" [eddb386f-0b62-4325-a14d-c2bb03e656bb] Running
	I0313 23:49:48.073196   22414 system_pods.go:61] "etcd-ha-504633-m02" [faf6ca7d-343c-4a0d-91ed-d2a401952f47] Running
	I0313 23:49:48.073199   22414 system_pods.go:61] "etcd-ha-504633-m03" [b1230ab0-c989-4b3e-96c7-f1ea1b866285] Running
	I0313 23:49:48.073202   22414 system_pods.go:61] "kindnet-5gfqz" [d8daf9d8-d130-4a0a-bfc8-a38d276444e1] Running
	I0313 23:49:48.073205   22414 system_pods.go:61] "kindnet-8kvnb" [b356234a-5293-417c-b78f-8d532dfe1bc1] Running
	I0313 23:49:48.073208   22414 system_pods.go:61] "kindnet-f4pz8" [9df1057a-d870-4e77-9261-0db3a8f2700f] Running
	I0313 23:49:48.073211   22414 system_pods.go:61] "kube-apiserver-ha-504633" [f4d0ca2b-c730-43a3-838f-05d28db9de36] Running
	I0313 23:49:48.073214   22414 system_pods.go:61] "kube-apiserver-ha-504633-m02" [3eff93a9-6e69-4837-9fe0-4357ae747b22] Running
	I0313 23:49:48.073217   22414 system_pods.go:61] "kube-apiserver-ha-504633-m03" [06b73358-0ea8-4b7e-b245-e3dea0a5a321] Running
	I0313 23:49:48.073220   22414 system_pods.go:61] "kube-controller-manager-ha-504633" [2a252519-49e0-4dce-81b3-73d6ea16b4d1] Running
	I0313 23:49:48.073223   22414 system_pods.go:61] "kube-controller-manager-ha-504633-m02" [b0ead72b-b8f6-449b-8f08-98bb5181b597] Running
	I0313 23:49:48.073226   22414 system_pods.go:61] "kube-controller-manager-ha-504633-m03" [93b8e260-d800-43d1-9b09-d72d7791b9db] Running
	I0313 23:49:48.073228   22414 system_pods.go:61] "kube-proxy-4s9t5" [e635f7e7-bc86-47d1-8368-db03fda06076] Running
	I0313 23:49:48.073231   22414 system_pods.go:61] "kube-proxy-fgcxp" [7ef9b719-adf6-4d07-9d11-9df0b5e923a6] Running
	I0313 23:49:48.073234   22414 system_pods.go:61] "kube-proxy-j56zl" [9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4] Running
	I0313 23:49:48.073237   22414 system_pods.go:61] "kube-scheduler-ha-504633" [41980454-97cd-4fa1-ac32-edc1e1c5bc02] Running
	I0313 23:49:48.073242   22414 system_pods.go:61] "kube-scheduler-ha-504633-m02" [24d5f3a2-02f9-4f0b-b1ea-94b8c065b6e4] Running
	I0313 23:49:48.073247   22414 system_pods.go:61] "kube-scheduler-ha-504633-m03" [de4d66e3-bec6-4dbd-ade8-d252b040ad68] Running
	I0313 23:49:48.073253   22414 system_pods.go:61] "kube-vip-ha-504633" [eed1a72e-a42d-4961-b3bf-f3a2bb7fa3bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:49:48.073267   22414 system_pods.go:61] "kube-vip-ha-504633-m02" [88577446-c879-45b6-a7f6-6b76e9e26fcc] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:49:48.073275   22414 system_pods.go:61] "kube-vip-ha-504633-m03" [3a6ecc18-b04d-43b3-bdc0-82b1f75b6a4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:49:48.073279   22414 system_pods.go:61] "storage-provisioner" [0e57f625-8927-418c-bdf2-9022439f858c] Running
	I0313 23:49:48.073288   22414 system_pods.go:74] duration metric: took 180.240776ms to wait for pod list to return data ...
	I0313 23:49:48.073297   22414 default_sa.go:34] waiting for default service account to be created ...
	I0313 23:49:48.257752   22414 request.go:629] Waited for 184.393715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/default/serviceaccounts
	I0313 23:49:48.257806   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/default/serviceaccounts
	I0313 23:49:48.257811   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:48.257818   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:48.257822   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:48.262100   22414 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0313 23:49:48.262231   22414 default_sa.go:45] found service account: "default"
	I0313 23:49:48.262252   22414 default_sa.go:55] duration metric: took 188.948599ms for default service account to be created ...
	I0313 23:49:48.262262   22414 system_pods.go:116] waiting for k8s-apps to be running ...
	I0313 23:49:48.457611   22414 request.go:629] Waited for 195.270655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:49:48.457681   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/namespaces/kube-system/pods
	I0313 23:49:48.457689   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:48.457700   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:48.457704   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:48.467177   22414 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0313 23:49:48.472914   22414 system_pods.go:86] 24 kube-system pods found
	I0313 23:49:48.472944   22414 system_pods.go:89] "coredns-5dd5756b68-dbkfv" [bb55bb86-7637-4571-af89-55b34361d46f] Running
	I0313 23:49:48.472949   22414 system_pods.go:89] "coredns-5dd5756b68-hh2kw" [ac81d022-8c47-4f99-8a34-bb4f73ead561] Running
	I0313 23:49:48.472954   22414 system_pods.go:89] "etcd-ha-504633" [eddb386f-0b62-4325-a14d-c2bb03e656bb] Running
	I0313 23:49:48.472958   22414 system_pods.go:89] "etcd-ha-504633-m02" [faf6ca7d-343c-4a0d-91ed-d2a401952f47] Running
	I0313 23:49:48.472962   22414 system_pods.go:89] "etcd-ha-504633-m03" [b1230ab0-c989-4b3e-96c7-f1ea1b866285] Running
	I0313 23:49:48.472967   22414 system_pods.go:89] "kindnet-5gfqz" [d8daf9d8-d130-4a0a-bfc8-a38d276444e1] Running
	I0313 23:49:48.472970   22414 system_pods.go:89] "kindnet-8kvnb" [b356234a-5293-417c-b78f-8d532dfe1bc1] Running
	I0313 23:49:48.472974   22414 system_pods.go:89] "kindnet-f4pz8" [9df1057a-d870-4e77-9261-0db3a8f2700f] Running
	I0313 23:49:48.472979   22414 system_pods.go:89] "kube-apiserver-ha-504633" [f4d0ca2b-c730-43a3-838f-05d28db9de36] Running
	I0313 23:49:48.472986   22414 system_pods.go:89] "kube-apiserver-ha-504633-m02" [3eff93a9-6e69-4837-9fe0-4357ae747b22] Running
	I0313 23:49:48.472992   22414 system_pods.go:89] "kube-apiserver-ha-504633-m03" [06b73358-0ea8-4b7e-b245-e3dea0a5a321] Running
	I0313 23:49:48.473003   22414 system_pods.go:89] "kube-controller-manager-ha-504633" [2a252519-49e0-4dce-81b3-73d6ea16b4d1] Running
	I0313 23:49:48.473007   22414 system_pods.go:89] "kube-controller-manager-ha-504633-m02" [b0ead72b-b8f6-449b-8f08-98bb5181b597] Running
	I0313 23:49:48.473011   22414 system_pods.go:89] "kube-controller-manager-ha-504633-m03" [93b8e260-d800-43d1-9b09-d72d7791b9db] Running
	I0313 23:49:48.473015   22414 system_pods.go:89] "kube-proxy-4s9t5" [e635f7e7-bc86-47d1-8368-db03fda06076] Running
	I0313 23:49:48.473019   22414 system_pods.go:89] "kube-proxy-fgcxp" [7ef9b719-adf6-4d07-9d11-9df0b5e923a6] Running
	I0313 23:49:48.473023   22414 system_pods.go:89] "kube-proxy-j56zl" [9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4] Running
	I0313 23:49:48.473027   22414 system_pods.go:89] "kube-scheduler-ha-504633" [41980454-97cd-4fa1-ac32-edc1e1c5bc02] Running
	I0313 23:49:48.473033   22414 system_pods.go:89] "kube-scheduler-ha-504633-m02" [24d5f3a2-02f9-4f0b-b1ea-94b8c065b6e4] Running
	I0313 23:49:48.473037   22414 system_pods.go:89] "kube-scheduler-ha-504633-m03" [de4d66e3-bec6-4dbd-ade8-d252b040ad68] Running
	I0313 23:49:48.473046   22414 system_pods.go:89] "kube-vip-ha-504633" [eed1a72e-a42d-4961-b3bf-f3a2bb7fa3bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:49:48.473054   22414 system_pods.go:89] "kube-vip-ha-504633-m02" [88577446-c879-45b6-a7f6-6b76e9e26fcc] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:49:48.473061   22414 system_pods.go:89] "kube-vip-ha-504633-m03" [3a6ecc18-b04d-43b3-bdc0-82b1f75b6a4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0313 23:49:48.473067   22414 system_pods.go:89] "storage-provisioner" [0e57f625-8927-418c-bdf2-9022439f858c] Running
	I0313 23:49:48.473074   22414 system_pods.go:126] duration metric: took 210.806744ms to wait for k8s-apps to be running ...
	I0313 23:49:48.473083   22414 system_svc.go:44] waiting for kubelet service to be running ....
	I0313 23:49:48.473125   22414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:49:48.489892   22414 system_svc.go:56] duration metric: took 16.801333ms WaitForService to wait for kubelet
	I0313 23:49:48.489925   22414 kubeadm.go:576] duration metric: took 42.073801943s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0313 23:49:48.489948   22414 node_conditions.go:102] verifying NodePressure condition ...
	I0313 23:49:48.657767   22414 request.go:629] Waited for 167.730049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.31:8443/api/v1/nodes
	I0313 23:49:48.657818   22414 round_trippers.go:463] GET https://192.168.39.31:8443/api/v1/nodes
	I0313 23:49:48.657823   22414 round_trippers.go:469] Request Headers:
	I0313 23:49:48.657831   22414 round_trippers.go:473]     Accept: application/json, */*
	I0313 23:49:48.657837   22414 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0313 23:49:48.663597   22414 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0313 23:49:48.664893   22414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0313 23:49:48.664912   22414 node_conditions.go:123] node cpu capacity is 2
	I0313 23:49:48.664922   22414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0313 23:49:48.664925   22414 node_conditions.go:123] node cpu capacity is 2
	I0313 23:49:48.664930   22414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0313 23:49:48.664934   22414 node_conditions.go:123] node cpu capacity is 2
	I0313 23:49:48.664937   22414 node_conditions.go:105] duration metric: took 174.984846ms to run NodePressure ...
	I0313 23:49:48.664948   22414 start.go:240] waiting for startup goroutines ...
	I0313 23:49:48.664969   22414 start.go:254] writing updated cluster config ...
	I0313 23:49:48.665215   22414 ssh_runner.go:195] Run: rm -f paused
	I0313 23:49:48.718671   22414 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0313 23:49:48.720821   22414 out.go:177] * Done! kubectl is now configured to use "ha-504633" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 13 23:54:25 ha-504633 crio[677]: time="2024-03-13 23:54:25.954706163Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374065954679090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93724b24-e29d-4f44-b35d-7044833a39ff name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:54:25 ha-504633 crio[677]: time="2024-03-13 23:54:25.955388530Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36c477c9-9802-4311-b9cd-881758d0c916 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:54:25 ha-504633 crio[677]: time="2024-03-13 23:54:25.955437370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36c477c9-9802-4311-b9cd-881758d0c916 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:54:25 ha-504633 crio[677]: time="2024-03-13 23:54:25.955668734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710373871791346467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710373793336158016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aadb470eed29b2f719f5fbb858bf9123995c5f9752f94e4c060b37334f36098a,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710373668554301132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0cf88a442bd5269582c822abc0a7930b457d582051bd1ba8ee6c91da797c0,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710373535030450727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534967959757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534992420839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87585aab2e4e09551759e68536fb211fa1b0caf3e52eaa8a9cc6c2a02018f9c,PodSandboxId:735a5bdd8eef7f11df3fa830558ddbe61791bece0de398767fc4db3bbdee441e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710373533407104753,Labels
:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710373529504359284,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710373509707648764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710373509711823598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f760286dfea8a62f478148ae4d5d43792cffbe25bf3faa4e1dc3f19d288fa6c1,PodSandboxId:fd22a4b33ad9b49285775f14547dd4ed73c1ca85c0e858aed820073cb0006444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710373509625494690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504
633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581070edea465f8d145d27a60ffb393b98695bf0829f1e57ab098e13914064c9,PodSandboxId:6ae040a8c89c0b7014c84995dc85f3d22dcbe425ba9c24179907f7e70b7fb7f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710373509619646191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36c477c9-9802-4311-b9cd-881758d0c916 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:54:25 ha-504633 crio[677]: time="2024-03-13 23:54:25.993917253Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1efb791d-70f9-444b-bcb0-d5ecf1990752 name=/runtime.v1.RuntimeService/Version
	Mar 13 23:54:25 ha-504633 crio[677]: time="2024-03-13 23:54:25.994064364Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1efb791d-70f9-444b-bcb0-d5ecf1990752 name=/runtime.v1.RuntimeService/Version
	Mar 13 23:54:25 ha-504633 crio[677]: time="2024-03-13 23:54:25.995473787Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5480e9fd-1b93-4b82-b193-03a7f9e92aae name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:54:25 ha-504633 crio[677]: time="2024-03-13 23:54:25.995914075Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374065995889246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5480e9fd-1b93-4b82-b193-03a7f9e92aae name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:54:25 ha-504633 crio[677]: time="2024-03-13 23:54:25.996474183Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abeff431-e4f7-414f-b7e1-1468b8a4942c name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:54:25 ha-504633 crio[677]: time="2024-03-13 23:54:25.996522220Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abeff431-e4f7-414f-b7e1-1468b8a4942c name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:54:25 ha-504633 crio[677]: time="2024-03-13 23:54:25.996787276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710373871791346467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710373793336158016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aadb470eed29b2f719f5fbb858bf9123995c5f9752f94e4c060b37334f36098a,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710373668554301132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0cf88a442bd5269582c822abc0a7930b457d582051bd1ba8ee6c91da797c0,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710373535030450727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534967959757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534992420839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87585aab2e4e09551759e68536fb211fa1b0caf3e52eaa8a9cc6c2a02018f9c,PodSandboxId:735a5bdd8eef7f11df3fa830558ddbe61791bece0de398767fc4db3bbdee441e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710373533407104753,Labels
:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710373529504359284,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710373509707648764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710373509711823598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f760286dfea8a62f478148ae4d5d43792cffbe25bf3faa4e1dc3f19d288fa6c1,PodSandboxId:fd22a4b33ad9b49285775f14547dd4ed73c1ca85c0e858aed820073cb0006444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710373509625494690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504
633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581070edea465f8d145d27a60ffb393b98695bf0829f1e57ab098e13914064c9,PodSandboxId:6ae040a8c89c0b7014c84995dc85f3d22dcbe425ba9c24179907f7e70b7fb7f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710373509619646191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abeff431-e4f7-414f-b7e1-1468b8a4942c name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:54:26 ha-504633 crio[677]: time="2024-03-13 23:54:26.036703076Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e5da7bb-09f3-4c61-942e-fb838f152c00 name=/runtime.v1.RuntimeService/Version
	Mar 13 23:54:26 ha-504633 crio[677]: time="2024-03-13 23:54:26.036776894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e5da7bb-09f3-4c61-942e-fb838f152c00 name=/runtime.v1.RuntimeService/Version
	Mar 13 23:54:26 ha-504633 crio[677]: time="2024-03-13 23:54:26.038326009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8be8c941-078a-4efa-813d-cb4cda629b31 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:54:26 ha-504633 crio[677]: time="2024-03-13 23:54:26.038930493Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374066038904644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8be8c941-078a-4efa-813d-cb4cda629b31 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:54:26 ha-504633 crio[677]: time="2024-03-13 23:54:26.039590297Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b73eb0c0-5a9e-47bd-a19a-5eabc5b14bb0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:54:26 ha-504633 crio[677]: time="2024-03-13 23:54:26.039642523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b73eb0c0-5a9e-47bd-a19a-5eabc5b14bb0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:54:26 ha-504633 crio[677]: time="2024-03-13 23:54:26.039876575Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710373871791346467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710373793336158016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aadb470eed29b2f719f5fbb858bf9123995c5f9752f94e4c060b37334f36098a,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710373668554301132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0cf88a442bd5269582c822abc0a7930b457d582051bd1ba8ee6c91da797c0,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710373535030450727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534967959757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534992420839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87585aab2e4e09551759e68536fb211fa1b0caf3e52eaa8a9cc6c2a02018f9c,PodSandboxId:735a5bdd8eef7f11df3fa830558ddbe61791bece0de398767fc4db3bbdee441e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710373533407104753,Labels
:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710373529504359284,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710373509707648764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710373509711823598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f760286dfea8a62f478148ae4d5d43792cffbe25bf3faa4e1dc3f19d288fa6c1,PodSandboxId:fd22a4b33ad9b49285775f14547dd4ed73c1ca85c0e858aed820073cb0006444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710373509625494690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504
633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581070edea465f8d145d27a60ffb393b98695bf0829f1e57ab098e13914064c9,PodSandboxId:6ae040a8c89c0b7014c84995dc85f3d22dcbe425ba9c24179907f7e70b7fb7f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710373509619646191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b73eb0c0-5a9e-47bd-a19a-5eabc5b14bb0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:54:26 ha-504633 crio[677]: time="2024-03-13 23:54:26.079597740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b226bf9-48b4-433e-8343-3929c512e0ff name=/runtime.v1.RuntimeService/Version
	Mar 13 23:54:26 ha-504633 crio[677]: time="2024-03-13 23:54:26.079671512Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b226bf9-48b4-433e-8343-3929c512e0ff name=/runtime.v1.RuntimeService/Version
	Mar 13 23:54:26 ha-504633 crio[677]: time="2024-03-13 23:54:26.081296481Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fbef2670-ff47-41e4-be1b-d87cd96ca62e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:54:26 ha-504633 crio[677]: time="2024-03-13 23:54:26.081725600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374066081700490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fbef2670-ff47-41e4-be1b-d87cd96ca62e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 13 23:54:26 ha-504633 crio[677]: time="2024-03-13 23:54:26.082256885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1dacd8d1-663c-41bc-a0f8-f9b6dd6512ec name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:54:26 ha-504633 crio[677]: time="2024-03-13 23:54:26.082349107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1dacd8d1-663c-41bc-a0f8-f9b6dd6512ec name=/runtime.v1.RuntimeService/ListContainers
	Mar 13 23:54:26 ha-504633 crio[677]: time="2024-03-13 23:54:26.082597032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710373871791346467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710373793336158016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aadb470eed29b2f719f5fbb858bf9123995c5f9752f94e4c060b37334f36098a,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710373668554301132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0cf88a442bd5269582c822abc0a7930b457d582051bd1ba8ee6c91da797c0,PodSandboxId:f7efac86eab07943f63f039fff185e34edc65262c894e0149c854a4698b230c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710373535030450727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534967959757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710373534992420839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87585aab2e4e09551759e68536fb211fa1b0caf3e52eaa8a9cc6c2a02018f9c,PodSandboxId:735a5bdd8eef7f11df3fa830558ddbe61791bece0de398767fc4db3bbdee441e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710373533407104753,Labels
:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710373529504359284,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710373509707648764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710373509711823598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f760286dfea8a62f478148ae4d5d43792cffbe25bf3faa4e1dc3f19d288fa6c1,PodSandboxId:fd22a4b33ad9b49285775f14547dd4ed73c1ca85c0e858aed820073cb0006444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710373509625494690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504
633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581070edea465f8d145d27a60ffb393b98695bf0829f1e57ab098e13914064c9,PodSandboxId:6ae040a8c89c0b7014c84995dc85f3d22dcbe425ba9c24179907f7e70b7fb7f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710373509619646191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1dacd8d1-663c-41bc-a0f8-f9b6dd6512ec name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b8cd8ab250ed1       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      3 minutes ago       Exited              kube-vip                  7                   6664331d2d846       kube-vip-ha-504633
	3e670be31d057       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   44694d6d0ddb1       busybox-5b5d89c9d6-dx92g
	aadb470eed29b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       1                   f7efac86eab07       storage-provisioner
	d6d0cf88a442b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Exited              storage-provisioner       0                   f7efac86eab07       storage-provisioner
	91c5fdb6071ed       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      8 minutes ago       Running             coredns                   0                   ac06f7523df34       coredns-5dd5756b68-dbkfv
	cea68e46e7574       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      8 minutes ago       Running             coredns                   0                   99eec3703a3ac       coredns-5dd5756b68-hh2kw
	b87585aab2e4e       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    8 minutes ago       Running             kindnet-cni               0                   735a5bdd8eef7       kindnet-8kvnb
	ce0dc1e514cfe       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      8 minutes ago       Running             kube-proxy                0                   508491d3a970a       kube-proxy-j56zl
	ec04eb9f36ad1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      9 minutes ago       Running             etcd                      0                   2e892e8826932       etcd-ha-504633
	03595624eed74       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      9 minutes ago       Running             kube-scheduler            0                   e5651d5d4cdf1       kube-scheduler-ha-504633
	f760286dfea8a       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      9 minutes ago       Running             kube-controller-manager   0                   fd22a4b33ad9b       kube-controller-manager-ha-504633
	581070edea465       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      9 minutes ago       Running             kube-apiserver            0                   6ae040a8c89c0       kube-apiserver-ha-504633
	
	
	==> coredns [91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025] <==
	[INFO] 10.244.0.4:34622 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180157s
	[INFO] 10.244.0.4:40563 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010268s
	[INFO] 10.244.0.4:45464 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106179s
	[INFO] 10.244.2.2:37253 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138314s
	[INFO] 10.244.2.2:37661 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001945874s
	[INFO] 10.244.2.2:45263 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00157594s
	[INFO] 10.244.2.2:56184 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095082s
	[INFO] 10.244.2.2:38062 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145314s
	[INFO] 10.244.2.2:47535 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099682s
	[INFO] 10.244.1.2:38146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248518s
	[INFO] 10.244.1.2:54521 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00160289s
	[INFO] 10.244.1.2:34985 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001396473s
	[INFO] 10.244.1.2:37504 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127175s
	[INFO] 10.244.1.2:47786 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089644s
	[INFO] 10.244.0.4:42865 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167315s
	[INFO] 10.244.2.2:37374 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167385s
	[INFO] 10.244.2.2:33251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009522s
	[INFO] 10.244.1.2:39140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158704s
	[INFO] 10.244.1.2:36398 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143215s
	[INFO] 10.244.1.2:60528 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012073s
	[INFO] 10.244.1.2:45057 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013653s
	[INFO] 10.244.0.4:55605 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153423s
	[INFO] 10.244.1.2:37595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218212s
	[INFO] 10.244.1.2:45054 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155156s
	[INFO] 10.244.1.2:45734 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159775s
	
	
	==> coredns [cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d] <==
	[INFO] 127.0.0.1:60482 - 8231 "HINFO IN 4188345321067739738.1742461500624588533. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008155088s
	[INFO] 10.244.2.2:60205 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002156886s
	[INFO] 10.244.1.2:53349 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000212493s
	[INFO] 10.244.1.2:36980 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001402731s
	[INFO] 10.244.0.4:41863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106374s
	[INFO] 10.244.0.4:36734 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153111s
	[INFO] 10.244.0.4:36918 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002576888s
	[INFO] 10.244.2.2:52506 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000216481s
	[INFO] 10.244.2.2:41181 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142291s
	[INFO] 10.244.1.2:41560 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185807s
	[INFO] 10.244.1.2:34843 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104567s
	[INFO] 10.244.1.2:36490 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000226318s
	[INFO] 10.244.0.4:60091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107953s
	[INFO] 10.244.0.4:37327 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151724s
	[INFO] 10.244.0.4:35399 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043972s
	[INFO] 10.244.2.2:59809 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090745s
	[INFO] 10.244.2.2:40239 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069623s
	[INFO] 10.244.0.4:36867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127937s
	[INFO] 10.244.0.4:35854 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000195121s
	[INFO] 10.244.0.4:56742 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109765s
	[INFO] 10.244.2.2:33696 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132875s
	[INFO] 10.244.2.2:51474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149174s
	[INFO] 10.244.2.2:58642 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010185s
	[INFO] 10.244.2.2:58203 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089769s
	[INFO] 10.244.1.2:54587 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000118471s
	
	
	==> describe nodes <==
	Name:               ha-504633
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_13T23_45_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:45:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Mar 2024 23:54:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Mar 2024 23:50:21 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Mar 2024 23:50:21 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Mar 2024 23:50:21 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Mar 2024 23:50:21 +0000   Wed, 13 Mar 2024 23:45:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.31
	  Hostname:    ha-504633
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 13fd8f4b90794ddf8d3d6bdb9051c529
	  System UUID:                13fd8f4b-9079-4ddf-8d3d-6bdb9051c529
	  Boot ID:                    83daf814-565c-4717-8930-43f7c53558eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-dx92g             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 coredns-5dd5756b68-dbkfv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m58s
	  kube-system                 coredns-5dd5756b68-hh2kw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m58s
	  kube-system                 etcd-ha-504633                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m9s
	  kube-system                 kindnet-8kvnb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m58s
	  kube-system                 kube-apiserver-ha-504633             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-controller-manager-ha-504633    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-proxy-j56zl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 kube-scheduler-ha-504633             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-vip-ha-504633                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m56s                  kube-proxy       
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     9m17s (x7 over 9m18s)  kubelet          Node ha-504633 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m17s (x8 over 9m18s)  kubelet          Node ha-504633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s (x8 over 9m18s)  kubelet          Node ha-504633 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m10s                  kubelet          Node ha-504633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m10s                  kubelet          Node ha-504633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m10s                  kubelet          Node ha-504633 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m59s                  node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal  NodeReady                8m52s                  kubelet          Node ha-504633 status is now: NodeReady
	  Normal  RegisteredNode           6m21s                  node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	
	
	Name:               ha-504633-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_13T23_47_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:47:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Mar 2024 23:51:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 13 Mar 2024 23:50:09 +0000   Wed, 13 Mar 2024 23:51:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 13 Mar 2024 23:50:09 +0000   Wed, 13 Mar 2024 23:51:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 13 Mar 2024 23:50:09 +0000   Wed, 13 Mar 2024 23:51:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 13 Mar 2024 23:50:09 +0000   Wed, 13 Mar 2024 23:51:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    ha-504633-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f6ba1a02ba14580ac16771f2b426854
	  System UUID:                5f6ba1a0-2ba1-4580-ac16-771f2b426854
	  Boot ID:                    d6e314b0-19ea-491a-ae7d-e96708f9fad9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-zfjjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-ha-504633-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m50s
	  kube-system                 kindnet-f4pz8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m50s
	  kube-system                 kube-apiserver-ha-504633-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 kube-controller-manager-ha-504633-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-proxy-4s9t5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 kube-scheduler-ha-504633-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-vip-ha-504633-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m34s  kube-proxy       
	  Normal  RegisteredNode  6m49s  node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode  6m21s  node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode  5m6s   node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  NodeNotReady    2m43s  node-controller  Node ha-504633-m02 status is now: NodeNotReady
	
	
	Name:               ha-504633-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_13T23_49_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:49:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Mar 2024 23:54:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Mar 2024 23:50:10 +0000   Wed, 13 Mar 2024 23:49:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Mar 2024 23:50:10 +0000   Wed, 13 Mar 2024 23:49:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Mar 2024 23:50:10 +0000   Wed, 13 Mar 2024 23:49:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Mar 2024 23:50:10 +0000   Wed, 13 Mar 2024 23:49:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    ha-504633-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 8771be2dfdfd44f18d592fcb20bb5a4c
	  System UUID:                8771be2d-fdfd-44f1-8d59-2fcb20bb5a4c
	  Boot ID:                    72dccc4c-7d49-4586-a425-779d86f055c9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-prmkb                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-ha-504633-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-5gfqz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m23s
	  kube-system                 kube-apiserver-ha-504633-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-controller-manager-ha-504633-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-fgcxp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-scheduler-ha-504633-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-vip-ha-504633-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m10s  kube-proxy       
	  Normal  RegisteredNode  5m21s  node-controller  Node ha-504633-m03 event: Registered Node ha-504633-m03 in Controller
	  Normal  RegisteredNode  5m19s  node-controller  Node ha-504633-m03 event: Registered Node ha-504633-m03 in Controller
	  Normal  RegisteredNode  5m6s   node-controller  Node ha-504633-m03 event: Registered Node ha-504633-m03 in Controller
	
	
	Name:               ha-504633-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_13T23_50_35_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:50:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Mar 2024 23:54:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Mar 2024 23:51:05 +0000   Wed, 13 Mar 2024 23:50:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Mar 2024 23:51:05 +0000   Wed, 13 Mar 2024 23:50:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Mar 2024 23:51:05 +0000   Wed, 13 Mar 2024 23:50:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Mar 2024 23:51:05 +0000   Wed, 13 Mar 2024 23:50:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-504633-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d985b67edcea4528bf49bb9fe5eeb65e
	  System UUID:                d985b67e-dcea-4528-bf49-bb9fe5eeb65e
	  Boot ID:                    e84e96f1-dcb9-4264-902b-3879a0b7824e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dn6gl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m52s
	  kube-system                 kube-proxy-7hr7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m52s (x5 over 3m53s)  kubelet          Node ha-504633-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x5 over 3m53s)  kubelet          Node ha-504633-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x5 over 3m53s)  kubelet          Node ha-504633-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal  NodeReady                3m41s                  kubelet          Node ha-504633-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar13 23:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054621] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040971] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.527621] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.806595] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.718708] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.715783] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.056763] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061483] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.171003] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.142829] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.235386] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Mar13 23:45] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.057845] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.706645] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.862236] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.155181] kauditd_printk_skb: 51 callbacks suppressed
	[  +2.379152] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[ +12.986535] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.322868] kauditd_printk_skb: 43 callbacks suppressed
	[Mar13 23:46] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714] <==
	{"level":"warn","ts":"2024-03-13T23:54:26.22266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.264135Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.266471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.322775Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.353566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.361556Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.366046Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.381587Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.392504Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.422334Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.426184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.433105Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.437359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.438228Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f97767851b864cd5","rtt":"9.495799ms","error":"dial tcp 192.168.39.47:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-03-13T23:54:26.438632Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f97767851b864cd5","rtt":"2.123949ms","error":"dial tcp 192.168.39.47:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-03-13T23:54:26.441879Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.451473Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.459736Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.467153Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.47269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.47652Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.482828Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.489919Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.497429Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:54:26.522768Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:54:26 up 9 min,  0 users,  load average: 0.45, 0.43, 0.26
	Linux ha-504633 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b87585aab2e4e09551759e68536fb211fa1b0caf3e52eaa8a9cc6c2a02018f9c] <==
	I0313 23:53:51.396161       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0313 23:54:01.414244       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0313 23:54:01.414403       1 main.go:227] handling current node
	I0313 23:54:01.414503       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0313 23:54:01.414534       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0313 23:54:01.414879       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0313 23:54:01.414911       1 main.go:250] Node ha-504633-m03 has CIDR [10.244.2.0/24] 
	I0313 23:54:01.415205       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0313 23:54:01.415300       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0313 23:54:11.430277       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0313 23:54:11.430386       1 main.go:227] handling current node
	I0313 23:54:11.430415       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0313 23:54:11.430435       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0313 23:54:11.430569       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0313 23:54:11.430590       1 main.go:250] Node ha-504633-m03 has CIDR [10.244.2.0/24] 
	I0313 23:54:11.430660       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0313 23:54:11.430679       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0313 23:54:21.447807       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0313 23:54:21.447914       1 main.go:227] handling current node
	I0313 23:54:21.447943       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0313 23:54:21.447961       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0313 23:54:21.448220       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0313 23:54:21.448245       1 main.go:250] Node ha-504633-m03 has CIDR [10.244.2.0/24] 
	I0313 23:54:21.448312       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0313 23:54:21.448331       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [581070edea465f8d145d27a60ffb393b98695bf0829f1e57ab098e13914064c9] <==
	Trace[1382010347]: ["GuaranteedUpdate etcd3" audit-id:9d800dc8-9d5c-4334-b46d-d9312af116de,key:/minions/ha-504633-m02,type:*core.Node,resource:nodes 2897ms (23:47:47.973)
	Trace[1382010347]:  ---"Txn call completed" 2894ms (23:47:50.869)]
	Trace[1382010347]: ---"About to apply patch" 2894ms (23:47:50.869)
	Trace[1382010347]: [2.897617525s] [2.897617525s] END
	I0313 23:47:50.874188       1 trace.go:236] Trace[1891003387]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:77fe511b-1e93-448f-8542-759fa0cc00eb,client:192.168.39.254,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-504633,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (13-Mar-2024 23:47:47.105) (total time: 3768ms):
	Trace[1891003387]: ["GuaranteedUpdate etcd3" audit-id:77fe511b-1e93-448f-8542-759fa0cc00eb,key:/leases/kube-node-lease/ha-504633,type:*coordination.Lease,resource:leases.coordination.k8s.io 3768ms (23:47:47.105)
	Trace[1891003387]:  ---"Txn call completed" 3767ms (23:47:50.873)]
	Trace[1891003387]: [3.768087495s] [3.768087495s] END
	I0313 23:47:50.875545       1 trace.go:236] Trace[1251262935]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:a2b7579e-f11c-4fdd-8bb1-1135281f6eb5,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-6jfanw7f7nh6bubgtbpmxrwaa4,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (13-Mar-2024 23:47:45.930) (total time: 4945ms):
	Trace[1251262935]: ["GuaranteedUpdate etcd3" audit-id:a2b7579e-f11c-4fdd-8bb1-1135281f6eb5,key:/leases/kube-system/apiserver-6jfanw7f7nh6bubgtbpmxrwaa4,type:*coordination.Lease,resource:leases.coordination.k8s.io 4945ms (23:47:45.930)
	Trace[1251262935]:  ---"Txn call completed" 4944ms (23:47:50.875)]
	Trace[1251262935]: [4.945246684s] [4.945246684s] END
	I0313 23:47:50.902375       1 trace.go:236] Trace[1177651587]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a69b7365-dad8-420a-90e3-79bbf70dbe0a,client:192.168.39.47,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-504633-m02/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (13-Mar-2024 23:47:46.404) (total time: 4497ms):
	Trace[1177651587]: ["GuaranteedUpdate etcd3" audit-id:a69b7365-dad8-420a-90e3-79bbf70dbe0a,key:/minions/ha-504633-m02,type:*core.Node,resource:nodes 4497ms (23:47:46.404)
	Trace[1177651587]:  ---"Txn call completed" 4460ms (23:47:50.867)
	Trace[1177651587]:  ---"Txn call completed" 32ms (23:47:50.901)]
	Trace[1177651587]: ---"About to apply patch" 4461ms (23:47:50.867)
	Trace[1177651587]: ---"Object stored in database" 32ms (23:47:50.901)
	Trace[1177651587]: [4.49778148s] [4.49778148s] END
	I0313 23:47:50.911845       1 trace.go:236] Trace[667313303]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:3b29c8fd-5302-42f6-95ab-621f32af71b0,client:192.168.39.47,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (13-Mar-2024 23:47:45.866) (total time: 5045ms):
	Trace[667313303]: [5.045079797s] [5.045079797s] END
	I0313 23:47:50.937552       1 trace.go:236] Trace[247215225]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:45535216-b5d7-433c-ae45-eab8672b8af7,client:192.168.39.47,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (13-Mar-2024 23:47:43.862) (total time: 7075ms):
	Trace[247215225]: ---"Write to database call failed" len:2991,err:pods "kube-apiserver-ha-504633-m02" already exists 18ms (23:47:50.937)
	Trace[247215225]: [7.075134913s] [7.075134913s] END
	W0313 23:51:24.992862       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.156 192.168.39.31]
	
	
	==> kube-controller-manager [f760286dfea8a62f478148ae4d5d43792cffbe25bf3faa4e1dc3f19d288fa6c1] <==
	I0313 23:49:55.430231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="110.843µs"
	I0313 23:49:59.938849       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="16.252706ms"
	I0313 23:49:59.939150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="64.928µs"
	I0313 23:50:34.641412       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-504633-m04\" does not exist"
	I0313 23:50:34.656553       1 range_allocator.go:380] "Set node PodCIDR" node="ha-504633-m04" podCIDRs=["10.244.3.0/24"]
	I0313 23:50:34.685765       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7hr7b"
	I0313 23:50:34.699861       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hnxz6"
	I0313 23:50:34.811917       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-4pm44"
	I0313 23:50:34.887484       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-npx4n"
	I0313 23:50:34.943758       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-bmf5z"
	I0313 23:50:34.955698       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-hnxz6"
	I0313 23:50:37.986035       1 event.go:307] "Event occurred" object="ha-504633-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller"
	I0313 23:50:38.000824       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-504633-m04"
	I0313 23:50:45.325321       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-504633-m04"
	I0313 23:51:43.029396       1 event.go:307] "Event occurred" object="ha-504633-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-504633-m02 status is now: NodeNotReady"
	I0313 23:51:43.032014       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-504633-m04"
	I0313 23:51:43.042655       1 event.go:307] "Event occurred" object="kube-system/etcd-ha-504633-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0313 23:51:43.069204       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-ha-504633-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0313 23:51:43.086438       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-ha-504633-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0313 23:51:43.104161       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-ha-504633-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0313 23:51:43.120549       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-zfjjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0313 23:51:43.142400       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-4s9t5" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0313 23:51:43.174126       1 event.go:307] "Event occurred" object="kube-system/kindnet-f4pz8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0313 23:51:43.185285       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="61.704053ms"
	I0313 23:51:43.185414       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="65.937µs"
	
	
	==> kube-proxy [ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a] <==
	I0313 23:45:29.711578       1 server_others.go:69] "Using iptables proxy"
	I0313 23:45:29.730452       1 node.go:141] Successfully retrieved node IP: 192.168.39.31
	I0313 23:45:29.778135       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0313 23:45:29.778173       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0313 23:45:29.781710       1 server_others.go:152] "Using iptables Proxier"
	I0313 23:45:29.782511       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0313 23:45:29.782796       1 server.go:846] "Version info" version="v1.28.4"
	I0313 23:45:29.782835       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:45:29.784428       1 config.go:188] "Starting service config controller"
	I0313 23:45:29.785222       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0313 23:45:29.785343       1 config.go:315] "Starting node config controller"
	I0313 23:45:29.785372       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0313 23:45:29.785796       1 config.go:97] "Starting endpoint slice config controller"
	I0313 23:45:29.785829       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0313 23:45:29.885734       1 shared_informer.go:318] Caches are synced for node config
	I0313 23:45:29.885761       1 shared_informer.go:318] Caches are synced for service config
	I0313 23:45:29.886938       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33] <==
	I0313 23:45:16.354108       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0313 23:49:03.728254       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fgcxp\": pod kube-proxy-fgcxp is already assigned to node \"ha-504633-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fgcxp" node="ha-504633-m03"
	E0313 23:49:03.728412       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 7ef9b719-adf6-4d07-9d11-9df0b5e923a6(kube-system/kube-proxy-fgcxp) wasn't assumed so cannot be forgotten"
	E0313 23:49:03.728517       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fgcxp\": pod kube-proxy-fgcxp is already assigned to node \"ha-504633-m03\"" pod="kube-system/kube-proxy-fgcxp"
	I0313 23:49:03.728602       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fgcxp" node="ha-504633-m03"
	E0313 23:49:03.728755       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5gfqz\": pod kindnet-5gfqz is already assigned to node \"ha-504633-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-5gfqz" node="ha-504633-m03"
	E0313 23:49:03.728931       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod d8daf9d8-d130-4a0a-bfc8-a38d276444e1(kube-system/kindnet-5gfqz) wasn't assumed so cannot be forgotten"
	E0313 23:49:03.729044       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5gfqz\": pod kindnet-5gfqz is already assigned to node \"ha-504633-m03\"" pod="kube-system/kindnet-5gfqz"
	I0313 23:49:03.729143       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5gfqz" node="ha-504633-m03"
	E0313 23:49:49.750063       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-zfjjt\": pod busybox-5b5d89c9d6-zfjjt is already assigned to node \"ha-504633-m02\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-zfjjt" node="ha-504633-m02"
	E0313 23:49:49.750226       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-zfjjt\": pod busybox-5b5d89c9d6-zfjjt is already assigned to node \"ha-504633-m02\"" pod="default/busybox-5b5d89c9d6-zfjjt"
	E0313 23:49:49.752680       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-dx92g\": pod busybox-5b5d89c9d6-dx92g is already assigned to node \"ha-504633\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-dx92g" node="ha-504633"
	E0313 23:49:49.752878       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod e4da8d7b-2fcc-46b3-a6a3-12f23d16de43(default/busybox-5b5d89c9d6-dx92g) wasn't assumed so cannot be forgotten"
	E0313 23:49:49.754271       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-dx92g\": pod busybox-5b5d89c9d6-dx92g is already assigned to node \"ha-504633\"" pod="default/busybox-5b5d89c9d6-dx92g"
	I0313 23:49:49.755372       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-dx92g" node="ha-504633"
	E0313 23:50:34.722286       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7hr7b\": pod kube-proxy-7hr7b is already assigned to node \"ha-504633-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7hr7b" node="ha-504633-m04"
	E0313 23:50:34.723189       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 6283f13a-f061-4d3b-a492-30bffd8d4201(kube-system/kube-proxy-7hr7b) wasn't assumed so cannot be forgotten"
	E0313 23:50:34.723330       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7hr7b\": pod kube-proxy-7hr7b is already assigned to node \"ha-504633-m04\"" pod="kube-system/kube-proxy-7hr7b"
	I0313 23:50:34.723401       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7hr7b" node="ha-504633-m04"
	E0313 23:50:34.798955       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-npx4n\": pod kube-proxy-npx4n is already assigned to node \"ha-504633-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-npx4n" node="ha-504633-m04"
	E0313 23:50:34.799432       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-npx4n\": pod kube-proxy-npx4n is already assigned to node \"ha-504633-m04\"" pod="kube-system/kube-proxy-npx4n"
	E0313 23:50:34.800517       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4pm44\": pod kindnet-4pm44 is already assigned to node \"ha-504633-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4pm44" node="ha-504633-m04"
	E0313 23:50:34.800734       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod ae5a1328-27b9-4887-9376-743463d7efda(kube-system/kindnet-4pm44) wasn't assumed so cannot be forgotten"
	E0313 23:50:34.800795       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4pm44\": pod kindnet-4pm44 is already assigned to node \"ha-504633-m04\"" pod="kube-system/kindnet-4pm44"
	I0313 23:50:34.800828       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4pm44" node="ha-504633-m04"
	
	
	==> kubelet <==
	Mar 13 23:52:50 ha-504633 kubelet[1439]: E0313 23:52:50.779817    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:53:04 ha-504633 kubelet[1439]: I0313 23:53:04.776807    1439 scope.go:117] "RemoveContainer" containerID="b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	Mar 13 23:53:04 ha-504633 kubelet[1439]: E0313 23:53:04.777429    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:53:15 ha-504633 kubelet[1439]: I0313 23:53:15.776809    1439 scope.go:117] "RemoveContainer" containerID="b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	Mar 13 23:53:15 ha-504633 kubelet[1439]: E0313 23:53:15.778012    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:53:16 ha-504633 kubelet[1439]: E0313 23:53:16.828498    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 13 23:53:16 ha-504633 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 13 23:53:16 ha-504633 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 13 23:53:16 ha-504633 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 13 23:53:16 ha-504633 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 13 23:53:27 ha-504633 kubelet[1439]: I0313 23:53:27.776642    1439 scope.go:117] "RemoveContainer" containerID="b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	Mar 13 23:53:27 ha-504633 kubelet[1439]: E0313 23:53:27.777418    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:53:41 ha-504633 kubelet[1439]: I0313 23:53:41.776652    1439 scope.go:117] "RemoveContainer" containerID="b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	Mar 13 23:53:41 ha-504633 kubelet[1439]: E0313 23:53:41.777405    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:53:52 ha-504633 kubelet[1439]: I0313 23:53:52.776885    1439 scope.go:117] "RemoveContainer" containerID="b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	Mar 13 23:53:52 ha-504633 kubelet[1439]: E0313 23:53:52.777288    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:54:05 ha-504633 kubelet[1439]: I0313 23:54:05.777215    1439 scope.go:117] "RemoveContainer" containerID="b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	Mar 13 23:54:05 ha-504633 kubelet[1439]: E0313 23:54:05.778125    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:54:16 ha-504633 kubelet[1439]: E0313 23:54:16.827363    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 13 23:54:16 ha-504633 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 13 23:54:16 ha-504633 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 13 23:54:16 ha-504633 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 13 23:54:16 ha-504633 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 13 23:54:19 ha-504633 kubelet[1439]: I0313 23:54:19.776744    1439 scope.go:117] "RemoveContainer" containerID="b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	Mar 13 23:54:19 ha-504633 kubelet[1439]: E0313 23:54:19.777073    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-504633 -n ha-504633
helpers_test.go:261: (dbg) Run:  kubectl --context ha-504633 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/RestartSecondaryNode (55.43s)

                                                
                                    
x
+
TestMutliControlPlane/serial/RestartClusterKeepsNodes (377.78s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-504633 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-504633 -v=7 --alsologtostderr
E0313 23:54:44.449159   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:56:07.498291   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-504633 -v=7 --alsologtostderr: exit status 82 (2m2.029906251s)

                                                
                                                
-- stdout --
	* Stopping node "ha-504633-m04"  ...
	* Stopping node "ha-504633-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0313 23:54:28.069968   28027 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:54:28.070097   28027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:54:28.070105   28027 out.go:304] Setting ErrFile to fd 2...
	I0313 23:54:28.070110   28027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:54:28.070354   28027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:54:28.070647   28027 out.go:298] Setting JSON to false
	I0313 23:54:28.070732   28027 mustload.go:65] Loading cluster: ha-504633
	I0313 23:54:28.071132   28027 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:54:28.071216   28027 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:54:28.071416   28027 mustload.go:65] Loading cluster: ha-504633
	I0313 23:54:28.071579   28027 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:54:28.071607   28027 stop.go:39] StopHost: ha-504633-m04
	I0313 23:54:28.071970   28027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:28.072009   28027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:28.086254   28027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0313 23:54:28.086706   28027 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:28.087260   28027 main.go:141] libmachine: Using API Version  1
	I0313 23:54:28.087283   28027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:28.087652   28027 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:28.091324   28027 out.go:177] * Stopping node "ha-504633-m04"  ...
	I0313 23:54:28.092647   28027 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0313 23:54:28.092679   28027 main.go:141] libmachine: (ha-504633-m04) Calling .DriverName
	I0313 23:54:28.092883   28027 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0313 23:54:28.092902   28027 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHHostname
	I0313 23:54:28.095909   28027 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:54:28.096375   28027 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:50:19 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0313 23:54:28.096408   28027 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0313 23:54:28.096554   28027 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHPort
	I0313 23:54:28.096725   28027 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHKeyPath
	I0313 23:54:28.096848   28027 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHUsername
	I0313 23:54:28.096988   28027 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m04/id_rsa Username:docker}
	I0313 23:54:28.181801   28027 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0313 23:54:28.235933   28027 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0313 23:54:28.290043   28027 main.go:141] libmachine: Stopping "ha-504633-m04"...
	I0313 23:54:28.290081   28027 main.go:141] libmachine: (ha-504633-m04) Calling .GetState
	I0313 23:54:28.291803   28027 main.go:141] libmachine: (ha-504633-m04) Calling .Stop
	I0313 23:54:28.295330   28027 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 0/120
	I0313 23:54:29.612739   28027 main.go:141] libmachine: (ha-504633-m04) Calling .GetState
	I0313 23:54:29.614078   28027 main.go:141] libmachine: Machine "ha-504633-m04" was stopped.
	I0313 23:54:29.614096   28027 stop.go:75] duration metric: took 1.521451335s to stop
	I0313 23:54:29.614116   28027 stop.go:39] StopHost: ha-504633-m03
	I0313 23:54:29.614391   28027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:54:29.614432   28027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:54:29.629397   28027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I0313 23:54:29.629859   28027 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:54:29.630421   28027 main.go:141] libmachine: Using API Version  1
	I0313 23:54:29.630445   28027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:54:29.630795   28027 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:54:29.632977   28027 out.go:177] * Stopping node "ha-504633-m03"  ...
	I0313 23:54:29.634570   28027 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0313 23:54:29.634593   28027 main.go:141] libmachine: (ha-504633-m03) Calling .DriverName
	I0313 23:54:29.634823   28027 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0313 23:54:29.634850   28027 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHHostname
	I0313 23:54:29.638097   28027 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:54:29.638499   28027 main.go:141] libmachine: (ha-504633-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:1d:f9", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:21 +0000 UTC Type:0 Mac:52:54:00:94:1d:f9 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-504633-m03 Clientid:01:52:54:00:94:1d:f9}
	I0313 23:54:29.638530   28027 main.go:141] libmachine: (ha-504633-m03) DBG | domain ha-504633-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:94:1d:f9 in network mk-ha-504633
	I0313 23:54:29.638694   28027 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHPort
	I0313 23:54:29.638902   28027 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHKeyPath
	I0313 23:54:29.639097   28027 main.go:141] libmachine: (ha-504633-m03) Calling .GetSSHUsername
	I0313 23:54:29.639222   28027 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m03/id_rsa Username:docker}
	I0313 23:54:29.723799   28027 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0313 23:54:29.779238   28027 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0313 23:54:29.839008   28027 main.go:141] libmachine: Stopping "ha-504633-m03"...
	I0313 23:54:29.839034   28027 main.go:141] libmachine: (ha-504633-m03) Calling .GetState
	I0313 23:54:29.840553   28027 main.go:141] libmachine: (ha-504633-m03) Calling .Stop
	I0313 23:54:29.844249   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 0/120
	I0313 23:54:30.845709   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 1/120
	I0313 23:54:31.847156   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 2/120
	I0313 23:54:32.849356   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 3/120
	I0313 23:54:33.850713   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 4/120
	I0313 23:54:34.852594   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 5/120
	I0313 23:54:35.854027   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 6/120
	I0313 23:54:36.855798   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 7/120
	I0313 23:54:37.857732   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 8/120
	I0313 23:54:38.859388   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 9/120
	I0313 23:54:39.861464   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 10/120
	I0313 23:54:40.862809   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 11/120
	I0313 23:54:41.864370   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 12/120
	I0313 23:54:42.865750   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 13/120
	I0313 23:54:43.867178   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 14/120
	I0313 23:54:44.868976   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 15/120
	I0313 23:54:45.870565   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 16/120
	I0313 23:54:46.871956   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 17/120
	I0313 23:54:47.873641   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 18/120
	I0313 23:54:48.875098   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 19/120
	I0313 23:54:49.877063   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 20/120
	I0313 23:54:50.878604   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 21/120
	I0313 23:54:51.880023   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 22/120
	I0313 23:54:52.881557   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 23/120
	I0313 23:54:53.883048   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 24/120
	I0313 23:54:54.884817   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 25/120
	I0313 23:54:55.886197   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 26/120
	I0313 23:54:56.887726   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 27/120
	I0313 23:54:57.889161   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 28/120
	I0313 23:54:58.890613   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 29/120
	I0313 23:54:59.892101   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 30/120
	I0313 23:55:00.893618   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 31/120
	I0313 23:55:01.895209   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 32/120
	I0313 23:55:02.896807   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 33/120
	I0313 23:55:03.898777   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 34/120
	I0313 23:55:04.900453   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 35/120
	I0313 23:55:05.902103   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 36/120
	I0313 23:55:06.903446   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 37/120
	I0313 23:55:07.905082   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 38/120
	I0313 23:55:08.906295   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 39/120
	I0313 23:55:09.908131   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 40/120
	I0313 23:55:10.909681   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 41/120
	I0313 23:55:11.911165   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 42/120
	I0313 23:55:12.913110   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 43/120
	I0313 23:55:13.914569   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 44/120
	I0313 23:55:14.916298   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 45/120
	I0313 23:55:15.917711   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 46/120
	I0313 23:55:16.919104   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 47/120
	I0313 23:55:17.920586   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 48/120
	I0313 23:55:18.921975   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 49/120
	I0313 23:55:19.923817   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 50/120
	I0313 23:55:20.925295   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 51/120
	I0313 23:55:21.926797   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 52/120
	I0313 23:55:22.928539   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 53/120
	I0313 23:55:23.930375   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 54/120
	I0313 23:55:24.931865   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 55/120
	I0313 23:55:25.933593   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 56/120
	I0313 23:55:26.935009   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 57/120
	I0313 23:55:27.937274   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 58/120
	I0313 23:55:28.938820   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 59/120
	I0313 23:55:29.940536   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 60/120
	I0313 23:55:30.941955   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 61/120
	I0313 23:55:31.943360   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 62/120
	I0313 23:55:32.944828   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 63/120
	I0313 23:55:33.946063   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 64/120
	I0313 23:55:34.948500   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 65/120
	I0313 23:55:35.949773   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 66/120
	I0313 23:55:36.951294   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 67/120
	I0313 23:55:37.952506   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 68/120
	I0313 23:55:38.954673   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 69/120
	I0313 23:55:39.956415   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 70/120
	I0313 23:55:40.957956   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 71/120
	I0313 23:55:41.959277   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 72/120
	I0313 23:55:42.960718   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 73/120
	I0313 23:55:43.961932   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 74/120
	I0313 23:55:44.963798   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 75/120
	I0313 23:55:45.965161   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 76/120
	I0313 23:55:46.966789   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 77/120
	I0313 23:55:47.968270   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 78/120
	I0313 23:55:48.969655   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 79/120
	I0313 23:55:49.971426   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 80/120
	I0313 23:55:50.973045   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 81/120
	I0313 23:55:51.974530   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 82/120
	I0313 23:55:52.975914   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 83/120
	I0313 23:55:53.977535   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 84/120
	I0313 23:55:54.979181   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 85/120
	I0313 23:55:55.980577   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 86/120
	I0313 23:55:56.982183   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 87/120
	I0313 23:55:57.983617   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 88/120
	I0313 23:55:58.985098   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 89/120
	I0313 23:55:59.987157   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 90/120
	I0313 23:56:00.989393   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 91/120
	I0313 23:56:01.990927   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 92/120
	I0313 23:56:02.992379   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 93/120
	I0313 23:56:03.993936   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 94/120
	I0313 23:56:04.995719   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 95/120
	I0313 23:56:05.997126   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 96/120
	I0313 23:56:06.998493   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 97/120
	I0313 23:56:07.999939   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 98/120
	I0313 23:56:09.001485   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 99/120
	I0313 23:56:10.003850   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 100/120
	I0313 23:56:11.005182   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 101/120
	I0313 23:56:12.006735   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 102/120
	I0313 23:56:13.008219   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 103/120
	I0313 23:56:14.009477   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 104/120
	I0313 23:56:15.011516   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 105/120
	I0313 23:56:16.012774   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 106/120
	I0313 23:56:17.014884   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 107/120
	I0313 23:56:18.016441   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 108/120
	I0313 23:56:19.017712   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 109/120
	I0313 23:56:20.019235   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 110/120
	I0313 23:56:21.020732   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 111/120
	I0313 23:56:22.022942   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 112/120
	I0313 23:56:23.025343   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 113/120
	I0313 23:56:24.026550   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 114/120
	I0313 23:56:25.028281   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 115/120
	I0313 23:56:26.029642   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 116/120
	I0313 23:56:27.031140   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 117/120
	I0313 23:56:28.032607   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 118/120
	I0313 23:56:29.034057   28027 main.go:141] libmachine: (ha-504633-m03) Waiting for machine to stop 119/120
	I0313 23:56:30.034941   28027 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0313 23:56:30.035001   28027 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0313 23:56:30.037157   28027 out.go:177] 
	W0313 23:56:30.038475   28027 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0313 23:56:30.038488   28027 out.go:239] * 
	* 
	W0313 23:56:30.040616   28027 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0313 23:56:30.042088   28027 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-504633 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-504633 --wait=true -v=7 --alsologtostderr
E0313 23:58:36.335185   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:59:44.448532   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-504633 --wait=true -v=7 --alsologtostderr: (4m12.874577301s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-504633
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-504633 -n ha-504633
helpers_test.go:244: <<< TestMutliControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-504633 logs -n 25: (2.008659228s)
helpers_test.go:252: TestMutliControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m02:/home/docker/cp-test_ha-504633-m03_ha-504633-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m02 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m03_ha-504633-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04:/home/docker/cp-test_ha-504633-m03_ha-504633-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m04 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m03_ha-504633-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp testdata/cp-test.txt                                                | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile1259924449/001/cp-test_ha-504633-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633:/home/docker/cp-test_ha-504633-m04_ha-504633.txt                       |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633 sudo cat                                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633.txt                                 |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m02:/home/docker/cp-test_ha-504633-m04_ha-504633-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m02 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03:/home/docker/cp-test_ha-504633-m04_ha-504633-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m03 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-504633 node stop m02 -v=7                                                     | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-504633 node start m02 -v=7                                                    | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-504633 -v=7                                                           | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-504633 -v=7                                                                | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-504633 --wait=true -v=7                                                    | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:56 UTC | 14 Mar 24 00:00 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-504633                                                                | ha-504633 | jenkins | v1.32.0 | 14 Mar 24 00:00 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
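	
	The cp/ssh pairs in the audit log above all follow one pattern: copy a file onto a node with "minikube cp", then read it back over SSH to confirm the transfer. A representative pair, reconstructed from the rows above (profile, node, and paths taken from the table; the binary is whichever minikube build the harness invokes):
	
	  minikube -p ha-504633 cp testdata/cp-test.txt ha-504633-m04:/home/docker/cp-test.txt
	  minikube -p ha-504633 ssh -n ha-504633-m04 "sudo cat /home/docker/cp-test.txt"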
	
	
	==> Last Start <==
	Log file created at: 2024/03/13 23:56:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0313 23:56:30.098794   28409 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:56:30.098914   28409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:56:30.098923   28409 out.go:304] Setting ErrFile to fd 2...
	I0313 23:56:30.098928   28409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:56:30.099134   28409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:56:30.099654   28409 out.go:298] Setting JSON to false
	I0313 23:56:30.100577   28409 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2333,"bootTime":1710371857,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:56:30.100637   28409 start.go:139] virtualization: kvm guest
	I0313 23:56:30.103023   28409 out.go:177] * [ha-504633] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:56:30.104427   28409 out.go:177]   - MINIKUBE_LOCATION=18375
	I0313 23:56:30.105802   28409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:56:30.104443   28409 notify.go:220] Checking for updates...
	I0313 23:56:30.108628   28409 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:56:30.109948   28409 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:56:30.111538   28409 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0313 23:56:30.112884   28409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0313 23:56:30.114617   28409 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:56:30.114710   28409 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:56:30.115158   28409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:56:30.115192   28409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:56:30.130073   28409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44079
	I0313 23:56:30.130476   28409 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:56:30.131066   28409 main.go:141] libmachine: Using API Version  1
	I0313 23:56:30.131089   28409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:56:30.131384   28409 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:56:30.131578   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:56:30.165833   28409 out.go:177] * Using the kvm2 driver based on existing profile
	I0313 23:56:30.167022   28409 start.go:297] selected driver: kvm2
	I0313 23:56:30.167035   28409 start.go:901] validating driver "kvm2" against &{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.241 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:56:30.167185   28409 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0313 23:56:30.167473   28409 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:56:30.167555   28409 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0313 23:56:30.182038   28409 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0313 23:56:30.182685   28409 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0313 23:56:30.182714   28409 cni.go:84] Creating CNI manager for ""
	I0313 23:56:30.182718   28409 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0313 23:56:30.182803   28409 start.go:340] cluster config:
	{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.241 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:56:30.182925   28409 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:56:30.185191   28409 out.go:177] * Starting "ha-504633" primary control-plane node in "ha-504633" cluster
	I0313 23:56:30.186941   28409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:56:30.186991   28409 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0313 23:56:30.187001   28409 cache.go:56] Caching tarball of preloaded images
	I0313 23:56:30.187073   28409 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0313 23:56:30.187091   28409 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0313 23:56:30.187207   28409 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:56:30.187388   28409 start.go:360] acquireMachinesLock for ha-504633: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0313 23:56:30.187433   28409 start.go:364] duration metric: took 28.831µs to acquireMachinesLock for "ha-504633"
	I0313 23:56:30.187447   28409 start.go:96] Skipping create...Using existing machine configuration
	I0313 23:56:30.187454   28409 fix.go:54] fixHost starting: 
	I0313 23:56:30.187701   28409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:56:30.187742   28409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:56:30.201690   28409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36541
	I0313 23:56:30.202140   28409 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:56:30.202610   28409 main.go:141] libmachine: Using API Version  1
	I0313 23:56:30.202628   28409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:56:30.203018   28409 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:56:30.203197   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:56:30.203351   28409 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:56:30.204871   28409 fix.go:112] recreateIfNeeded on ha-504633: state=Running err=<nil>
	W0313 23:56:30.204890   28409 fix.go:138] unexpected machine state, will restart: <nil>
	I0313 23:56:30.206803   28409 out.go:177] * Updating the running kvm2 "ha-504633" VM ...
	I0313 23:56:30.207965   28409 machine.go:94] provisionDockerMachine start ...
	I0313 23:56:30.207984   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:56:30.208167   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.210512   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.210996   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.211031   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.211147   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.211321   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.211470   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.211605   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.211757   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:56:30.211986   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:56:30.212003   28409 main.go:141] libmachine: About to run SSH command:
	hostname
	I0313 23:56:30.328432   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-504633
	
	I0313 23:56:30.328460   28409 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:56:30.328737   28409 buildroot.go:166] provisioning hostname "ha-504633"
	I0313 23:56:30.328768   28409 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:56:30.328970   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.331435   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.331897   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.331929   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.332007   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.332203   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.332380   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.332532   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.332676   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:56:30.332881   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:56:30.332904   28409 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-504633 && echo "ha-504633" | sudo tee /etc/hostname
	I0313 23:56:30.464341   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-504633
	
	I0313 23:56:30.464368   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.467065   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.467483   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.467515   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.467715   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.467914   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.468065   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.468194   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.468333   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:56:30.468502   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:56:30.468524   28409 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-504633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-504633/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-504633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0313 23:56:30.584360   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:56:30.584396   28409 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0313 23:56:30.584420   28409 buildroot.go:174] setting up certificates
	I0313 23:56:30.584430   28409 provision.go:84] configureAuth start
	I0313 23:56:30.584438   28409 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:56:30.584756   28409 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:56:30.587336   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.587798   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.587826   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.587958   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.590133   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.590486   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.590511   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.590669   28409 provision.go:143] copyHostCerts
	I0313 23:56:30.590701   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:56:30.590755   28409 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0313 23:56:30.590781   28409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:56:30.590859   28409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0313 23:56:30.590971   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:56:30.590997   28409 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0313 23:56:30.591003   28409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:56:30.591041   28409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0313 23:56:30.591114   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:56:30.591140   28409 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0313 23:56:30.591146   28409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:56:30.591179   28409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0313 23:56:30.591247   28409 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.ha-504633 san=[127.0.0.1 192.168.39.31 ha-504633 localhost minikube]
	I0313 23:56:30.693441   28409 provision.go:177] copyRemoteCerts
	I0313 23:56:30.693505   28409 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0313 23:56:30.693564   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.696012   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.696413   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.696440   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.696627   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.696839   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.697011   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.697175   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:56:30.785650   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0313 23:56:30.785717   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0313 23:56:30.817216   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0313 23:56:30.817299   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0313 23:56:30.853125   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0313 23:56:30.853195   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0313 23:56:30.881900   28409 provision.go:87] duration metric: took 297.459041ms to configureAuth
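	A hypothetical spot-check (not something the test runs) of the SANs baked into the server certificate copied to /etc/docker above, assuming openssl is available in the guest:
	
	  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	  # expected to list the san set from provision.go above: 127.0.0.1, 192.168.39.31, ha-504633, localhost, minikube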
	I0313 23:56:30.881929   28409 buildroot.go:189] setting minikube options for container-runtime
	I0313 23:56:30.882126   28409 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:56:30.882189   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.884828   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.885248   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.885279   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.885467   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.885658   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.885801   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.885941   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.886061   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:56:30.886259   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:56:30.886275   28409 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0313 23:58:01.808461   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0313 23:58:01.808491   28409 machine.go:97] duration metric: took 1m31.600511132s to provisionDockerMachine
	I0313 23:58:01.808508   28409 start.go:293] postStartSetup for "ha-504633" (driver="kvm2")
	I0313 23:58:01.808522   28409 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0313 23:58:01.808543   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:01.808861   28409 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0313 23:58:01.808887   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:01.812149   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:01.812576   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:01.812605   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:01.812815   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:01.813014   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:01.813193   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:01.813334   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:58:01.903105   28409 ssh_runner.go:195] Run: cat /etc/os-release
	I0313 23:58:01.907651   28409 info.go:137] Remote host: Buildroot 2023.02.9
	I0313 23:58:01.907680   28409 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0313 23:58:01.907783   28409 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0313 23:58:01.907865   28409 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0313 23:58:01.907876   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /etc/ssl/certs/122682.pem
	I0313 23:58:01.907960   28409 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0313 23:58:01.919465   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:58:01.946408   28409 start.go:296] duration metric: took 137.888217ms for postStartSetup
	I0313 23:58:01.946446   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:01.946781   28409 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0313 23:58:01.946811   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:01.949427   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:01.949914   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:01.949935   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:01.950107   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:01.950318   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:01.950518   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:01.950688   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	W0313 23:58:02.037663   28409 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0313 23:58:02.037686   28409 fix.go:56] duration metric: took 1m31.850231206s for fixHost
	I0313 23:58:02.037711   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:02.040343   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.040708   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:02.040738   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.040849   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:02.041044   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:02.041210   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:02.041348   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:02.041514   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:58:02.041672   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:58:02.041682   28409 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0313 23:58:02.155870   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710374282.116750971
	
	I0313 23:58:02.155898   28409 fix.go:216] guest clock: 1710374282.116750971
	I0313 23:58:02.155910   28409 fix.go:229] Guest: 2024-03-13 23:58:02.116750971 +0000 UTC Remote: 2024-03-13 23:58:02.037694094 +0000 UTC m=+91.985482062 (delta=79.056877ms)
	I0313 23:58:02.155974   28409 fix.go:200] guest clock delta is within tolerance: 79.056877ms
	I0313 23:58:02.155983   28409 start.go:83] releasing machines lock for "ha-504633", held for 1m31.968539762s
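	(The 79 ms delta reported above is simply guest wall clock minus host wall clock at that instant: 1710374282.116750971 s - 1710374282.037694094 s = 0.079056877 s.)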
	I0313 23:58:02.156015   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:02.156280   28409 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:58:02.158806   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.159205   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:02.159237   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.159370   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:02.160006   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:02.160181   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:02.160247   28409 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0313 23:58:02.160291   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:02.160405   28409 ssh_runner.go:195] Run: cat /version.json
	I0313 23:58:02.160429   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:02.162810   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.163073   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.163115   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:02.163140   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.163246   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:02.163435   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:02.163505   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:02.163525   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.163591   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:02.163741   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:02.163819   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:58:02.163890   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:02.164013   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:02.164150   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:58:02.244724   28409 ssh_runner.go:195] Run: systemctl --version
	I0313 23:58:02.281104   28409 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0313 23:58:02.443505   28409 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0313 23:58:02.454543   28409 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0313 23:58:02.454609   28409 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0313 23:58:02.464849   28409 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0313 23:58:02.464876   28409 start.go:494] detecting cgroup driver to use...
	I0313 23:58:02.464929   28409 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0313 23:58:02.482057   28409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0313 23:58:02.496724   28409 docker.go:217] disabling cri-docker service (if available) ...
	I0313 23:58:02.496794   28409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0313 23:58:02.511697   28409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0313 23:58:02.527065   28409 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0313 23:58:02.681040   28409 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0313 23:58:02.835362   28409 docker.go:233] disabling docker service ...
	I0313 23:58:02.835438   28409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0313 23:58:02.854015   28409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0313 23:58:02.870563   28409 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0313 23:58:03.023394   28409 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0313 23:58:03.174638   28409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0313 23:58:03.190413   28409 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0313 23:58:03.211721   28409 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0313 23:58:03.211780   28409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:58:03.222878   28409 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0313 23:58:03.222942   28409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:58:03.233630   28409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:58:03.244322   28409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:58:03.255468   28409 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0313 23:58:03.267600   28409 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0313 23:58:03.277642   28409 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0313 23:58:03.287571   28409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:58:03.439510   28409 ssh_runner.go:195] Run: sudo systemctl restart crio
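	The three sed edits above amount to setting the pause image, cgroup manager, and conmon cgroup in the cri-o drop-in; a hypothetical spot-check (not part of the run) of the resulting values:
	
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.9"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"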
	I0313 23:58:03.748831   28409 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0313 23:58:03.748906   28409 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0313 23:58:03.756137   28409 start.go:562] Will wait 60s for crictl version
	I0313 23:58:03.756204   28409 ssh_runner.go:195] Run: which crictl
	I0313 23:58:03.760744   28409 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0313 23:58:03.805526   28409 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0313 23:58:03.805610   28409 ssh_runner.go:195] Run: crio --version
	I0313 23:58:03.836970   28409 ssh_runner.go:195] Run: crio --version
	I0313 23:58:03.869816   28409 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0313 23:58:03.871315   28409 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:58:03.873980   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:03.874401   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:03.874426   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:03.874660   28409 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0313 23:58:03.879849   28409 kubeadm.go:877] updating cluster {Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.241 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0313 23:58:03.880030   28409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:58:03.880092   28409 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:58:03.930042   28409 crio.go:496] all images are preloaded for cri-o runtime.
	I0313 23:58:03.930078   28409 crio.go:415] Images already preloaded, skipping extraction
	I0313 23:58:03.930134   28409 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:58:03.969471   28409 crio.go:496] all images are preloaded for cri-o runtime.
	I0313 23:58:03.969495   28409 cache_images.go:84] Images are preloaded, skipping loading
	I0313 23:58:03.969505   28409 kubeadm.go:928] updating node { 192.168.39.31 8443 v1.28.4 crio true true} ...
	I0313 23:58:03.969619   28409 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-504633 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0313 23:58:03.969719   28409 ssh_runner.go:195] Run: crio config
	I0313 23:58:04.017739   28409 cni.go:84] Creating CNI manager for ""
	I0313 23:58:04.017763   28409 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0313 23:58:04.017775   28409 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0313 23:58:04.017804   28409 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.31 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-504633 NodeName:ha-504633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0313 23:58:04.017946   28409 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-504633"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
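	Editor's note: the evictionHard values above were restored to the literal "0%" that the kubelet config actually carries; the `"0%!"(MISSING)` noise seen in raw dumps is Go's fmt missing-argument marker, which appears when the rendered config is pushed through a Printf-style logger without escaping the literal '%'. A minimal sketch reproducing the effect (an assumption about the cause, not minikube's actual code path):

	    package main

	    import "fmt"

	    func main() {
	    	cfg := `evictionHard:
	      nodefs.available: "0%"`
	    	// Passing the config itself as the format string garbles the '%':
	    	// fmt sees '%"' as a verb with no argument and prints %!"(MISSING).
	    	fmt.Printf(cfg + "\n")
	    	// Passing it as an argument leaves the literal '%' untouched.
	    	fmt.Printf("%s\n", cfg)
	    }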
	
	I0313 23:58:04.017969   28409 kube-vip.go:105] generating kube-vip config ...
	I0313 23:58:04.018032   28409 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
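	Editor's note: the manifest above is delivered as a static pod (the scp to /etc/kubernetes/manifests/kube-vip.yaml a few lines below), so kubelet itself creates the kube-vip container; the `address` env is the shared API-server VIP (192.168.39.254) and the leader-election settings let the three control planes decide which node currently holds it. A minimal sketch of writing such a manifest locally, assuming a hypothetical writeStaticPod helper (minikube instead copies the same bytes over SSH):

	    package main

	    import (
	    	"os"
	    	"path/filepath"
	    )

	    // writeStaticPod drops a rendered manifest into kubelet's static-pod
	    // directory; kubelet watches this path and (re)creates the pod on change.
	    func writeStaticPod(manifest []byte) error {
	    	dir := "/etc/kubernetes/manifests"
	    	if err := os.MkdirAll(dir, 0o755); err != nil {
	    		return err
	    	}
	    	return os.WriteFile(filepath.Join(dir, "kube-vip.yaml"), manifest, 0o644)
	    }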
	I0313 23:58:04.018085   28409 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0313 23:58:04.028452   28409 binaries.go:44] Found k8s binaries, skipping transfer
	I0313 23:58:04.028565   28409 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0313 23:58:04.038875   28409 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0313 23:58:04.057207   28409 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0313 23:58:04.076202   28409 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0313 23:58:04.094416   28409 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0313 23:58:04.112686   28409 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0313 23:58:04.118169   28409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:58:04.271186   28409 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:58:04.288068   28409 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633 for IP: 192.168.39.31
	I0313 23:58:04.288091   28409 certs.go:194] generating shared ca certs ...
	I0313 23:58:04.288105   28409 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:58:04.288255   28409 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0313 23:58:04.288306   28409 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0313 23:58:04.288320   28409 certs.go:256] generating profile certs ...
	I0313 23:58:04.288406   28409 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key
	I0313 23:58:04.288441   28409 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.877a2b10
	I0313 23:58:04.288463   28409 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.877a2b10 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.31 192.168.39.47 192.168.39.156 192.168.39.254]
	I0313 23:58:04.453092   28409 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.877a2b10 ...
	I0313 23:58:04.453124   28409 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.877a2b10: {Name:mk7f4dfb8ffb67726421360a0ca328ea06182ab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:58:04.453293   28409 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.877a2b10 ...
	I0313 23:58:04.453304   28409 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.877a2b10: {Name:mkbf58ff48cd95f35e326039dbd8db4c6d576092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:58:04.453372   28409 certs.go:381] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.877a2b10 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt
	I0313 23:58:04.453516   28409 certs.go:385] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.877a2b10 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key
	I0313 23:58:04.453663   28409 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key
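	Editor's note: the apiserver certificate regenerated above is signed with SANs covering the service IP, localhost, every control-plane node IP and the VIP, then copied from its hashed temporary name (apiserver.crt.877a2b10) to apiserver.crt. A rough sketch of the corresponding x509 template, with purely illustrative field values (not minikube's exact certificate code):

	    package main

	    import (
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"math/big"
	    	"net"
	    	"time"
	    )

	    // apiserverCertTemplate builds a serving-cert template whose IP SANs match
	    // the list logged above, including the kube-vip VIP 192.168.39.254.
	    func apiserverCertTemplate() *x509.Certificate {
	    	ips := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
	    		"192.168.39.31", "192.168.39.47", "192.168.39.156", "192.168.39.254"}
	    	var sans []net.IP
	    	for _, s := range ips {
	    		sans = append(sans, net.ParseIP(s))
	    	}
	    	return &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{CommonName: "minikube"},
	    		IPAddresses:  sans,
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    	}
	    }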
	I0313 23:58:04.453679   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0313 23:58:04.453691   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0313 23:58:04.453702   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0313 23:58:04.453719   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0313 23:58:04.453730   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0313 23:58:04.453740   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0313 23:58:04.453749   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0313 23:58:04.453760   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0313 23:58:04.453819   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0313 23:58:04.453846   28409 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0313 23:58:04.453853   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0313 23:58:04.453871   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0313 23:58:04.453894   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0313 23:58:04.453914   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0313 23:58:04.453947   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:58:04.453974   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem -> /usr/share/ca-certificates/12268.pem
	I0313 23:58:04.453986   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /usr/share/ca-certificates/122682.pem
	I0313 23:58:04.453998   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:58:04.454536   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0313 23:58:04.484150   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0313 23:58:04.511621   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0313 23:58:04.537753   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0313 23:58:04.563144   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0313 23:58:04.589432   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0313 23:58:04.614262   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0313 23:58:04.640703   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0313 23:58:04.666472   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0313 23:58:04.694051   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0313 23:58:04.720108   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0313 23:58:04.745795   28409 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0313 23:58:04.764781   28409 ssh_runner.go:195] Run: openssl version
	I0313 23:58:04.771274   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0313 23:58:04.782595   28409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0313 23:58:04.787475   28409 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0313 23:58:04.787534   28409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0313 23:58:04.793412   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0313 23:58:04.803571   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0313 23:58:04.815663   28409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0313 23:58:04.820695   28409 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0313 23:58:04.820752   28409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0313 23:58:04.827074   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0313 23:58:04.837326   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0313 23:58:04.849315   28409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:58:04.854566   28409 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:58:04.854629   28409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:58:04.860986   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
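	Editor's note: the three command pairs above implement OpenSSL's directory-based CA lookup: hash the certificate subject, then link the file as /etc/ssl/certs/<hash>.0. A small sketch of the same steps from Go, using a hypothetical linkCACert helper that shells out to the commands shown in the log:

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"strings"
	    )

	    // linkCACert computes the OpenSSL subject hash of a CA certificate and
	    // links it as /etc/ssl/certs/<hash>.0, mirroring `openssl x509 -hash` +
	    // `ln -fs` above so OpenSSL clients can find the CA by directory lookup.
	    func linkCACert(pemPath string) error {
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	    	if err != nil {
	    		return err
	    	}
	    	hash := strings.TrimSpace(string(out))
	    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	    	_ = os.Remove(link) // replace an existing link, as `ln -fs` does
	    	return os.Symlink(pemPath, link)
	    }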
	I0313 23:58:04.871499   28409 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0313 23:58:04.877769   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0313 23:58:04.884080   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0313 23:58:04.890339   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0313 23:58:04.896291   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0313 23:58:04.902256   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0313 23:58:04.908159   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
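	Editor's note: each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is how minikube decides whether control-plane certs need regeneration before reuse. An equivalent check in Go with crypto/x509 (illustrative helper, not minikube's own):

	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    // expiresWithin reports whether the first certificate in a PEM file expires
	    // inside the given window -- the same test `-checkend 86400` performs.
	    func expiresWithin(path string, window time.Duration) (bool, error) {
	    	data, err := os.ReadFile(path)
	    	if err != nil {
	    		return false, err
	    	}
	    	block, _ := pem.Decode(data)
	    	if block == nil {
	    		return false, fmt.Errorf("no PEM block in %s", path)
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		return false, err
	    	}
	    	return time.Now().Add(window).After(cert.NotAfter), nil
	    }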
	I0313 23:58:04.914108   28409 kubeadm.go:391] StartCluster: {Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.241 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:58:04.914211   28409 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0313 23:58:04.914255   28409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0313 23:58:05.005477   28409 cri.go:89] found id: "156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	I0313 23:58:05.005504   28409 cri.go:89] found id: "997c2a0595975aac0fa1f4e2f4ed2b071768dbbe122a24a9ace7bcddac59a574"
	I0313 23:58:05.005508   28409 cri.go:89] found id: "705a44943e5ae9684327019d5cba671d9e6fc4baa380fc53f9177b6231975ffb"
	I0313 23:58:05.005511   28409 cri.go:89] found id: "d6dc521bb48cc0b39badfba80b2def42ad744f06beeca9bacdced9693d0c4531"
	I0313 23:58:05.005514   28409 cri.go:89] found id: "b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	I0313 23:58:05.005517   28409 cri.go:89] found id: "aadb470eed29b2f719f5fbb858bf9123995c5f9752f94e4c060b37334f36098a"
	I0313 23:58:05.005519   28409 cri.go:89] found id: "91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025"
	I0313 23:58:05.005521   28409 cri.go:89] found id: "cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d"
	I0313 23:58:05.005524   28409 cri.go:89] found id: "b87585aab2e4e09551759e68536fb211fa1b0caf3e52eaa8a9cc6c2a02018f9c"
	I0313 23:58:05.005528   28409 cri.go:89] found id: "ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a"
	I0313 23:58:05.005531   28409 cri.go:89] found id: "ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714"
	I0313 23:58:05.005535   28409 cri.go:89] found id: "03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33"
	I0313 23:58:05.005538   28409 cri.go:89] found id: "f760286dfea8a62f478148ae4d5d43792cffbe25bf3faa4e1dc3f19d288fa6c1"
	I0313 23:58:05.005540   28409 cri.go:89] found id: "581070edea465f8d145d27a60ffb393b98695bf0829f1e57ab098e13914064c9"
	I0313 23:58:05.005553   28409 cri.go:89] found id: ""
	I0313 23:58:05.005637   28409 ssh_runner.go:195] Run: sudo runc list -f json
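	Editor's note: the container IDs listed above are the output of the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` command shown a few lines earlier, one ID per line, which StartCluster uses to find existing kube-system containers before restarting the control plane. A minimal local sketch of the same query (minikube runs it over SSH on the node):

	    package main

	    import (
	    	"os/exec"
	    	"strings"
	    )

	    // kubeSystemContainerIDs runs the crictl query shown above and returns
	    // one container ID per whitespace-separated token of the output.
	    func kubeSystemContainerIDs() ([]string, error) {
	    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
	    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	    	if err != nil {
	    		return nil, err
	    	}
	    	return strings.Fields(string(out)), nil
	    }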
	
	
	==> CRI-O <==
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.770168030Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68b9d921-203e-4e5a-87d2-230a6b8fee56 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.771603826Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d33a94f9-5df0-41d1-a07e-e0acaae49c6e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.772335207Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374443772307884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d33a94f9-5df0-41d1-a07e-e0acaae49c6e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.773206739Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d92a85f-4a6f-41bd-97c8-02f2a28d1f90 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.773327285Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d92a85f-4a6f-41bd-97c8-02f2a28d1f90 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.773829266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710374344805858713,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad93782f06ad008a87fef3300e81338c4438423e076a68d6c3ae4af2619b74f,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710374339792026087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710374334790569126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710374331792652786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b6800024430de256acac63a91aee0f3c4048c486b4b6053d6e59210fb898fe,PodSandboxId:1833af16e7cfaf95c1a0781cae51b9cb93c6afd490822c41b42c1317193f9dc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710374323069349310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791,PodSandboxId:88f517fd35061841044e028230a19ed5fe98e43f5febc68a588a05080328c658,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710374290395150151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:b964950d4816e397c8b058ba0d8f87de8d09cd44fda17dce515d39044c93f420,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710374289594055159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:a733ab586b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710374289989765088,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3
d6f776a6b20ee3c1b32374c40385cd3b826094de44efd90e86b2c4581cb25,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710374289880765157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed23280f4a5ee98a4f01e97
12529e4f0da45e69d85ca4208e58b8c051827bb,PodSandboxId:d374e5b744b402d34a336793d0b8b5232953c2e0e7acc19d595a5992ef77d217,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374289820663264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51,PodSandboxId:adde121b4482d5558f959b3bb88f58ebbf6ebb1c6271621ce69e6ca878f5584d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710374289759118794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba,PodSandboxId:64b3632a81b1a0792699205e294b67f1fd4957d0511f594937ead53999b1b69e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710374289723662664,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710374289629636370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85,PodSandboxId:2e1ee02dfee79aae3748308611205832d71c4ff9aef700d306cf26ac1dbd0e60,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374285184221956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\
"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710374186787396189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kuber
netes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710373793336215727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubern
etes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534968581242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534992474919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710373529504369187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710373509707712229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710373509712425832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d92a85f-4a6f-41bd-97c8-02f2a28d1f90 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.840712446Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f416008f-4da9-4231-ae85-da79aac85597 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.840823823Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f416008f-4da9-4231-ae85-da79aac85597 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.848714944Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf1a4546-bce1-4e5e-931b-296c55c752b0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.850937082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374443850902049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf1a4546-bce1-4e5e-931b-296c55c752b0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.851951342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa054b6f-d63d-4937-8d81-71616ad95b75 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.852072482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa054b6f-d63d-4937-8d81-71616ad95b75 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.853020670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710374344805858713,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad93782f06ad008a87fef3300e81338c4438423e076a68d6c3ae4af2619b74f,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710374339792026087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710374334790569126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710374331792652786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b6800024430de256acac63a91aee0f3c4048c486b4b6053d6e59210fb898fe,PodSandboxId:1833af16e7cfaf95c1a0781cae51b9cb93c6afd490822c41b42c1317193f9dc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710374323069349310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791,PodSandboxId:88f517fd35061841044e028230a19ed5fe98e43f5febc68a588a05080328c658,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710374290395150151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:b964950d4816e397c8b058ba0d8f87de8d09cd44fda17dce515d39044c93f420,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710374289594055159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:a733ab586b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710374289989765088,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3
d6f776a6b20ee3c1b32374c40385cd3b826094de44efd90e86b2c4581cb25,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710374289880765157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed23280f4a5ee98a4f01e97
12529e4f0da45e69d85ca4208e58b8c051827bb,PodSandboxId:d374e5b744b402d34a336793d0b8b5232953c2e0e7acc19d595a5992ef77d217,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374289820663264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51,PodSandboxId:adde121b4482d5558f959b3bb88f58ebbf6ebb1c6271621ce69e6ca878f5584d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710374289759118794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba,PodSandboxId:64b3632a81b1a0792699205e294b67f1fd4957d0511f594937ead53999b1b69e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710374289723662664,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710374289629636370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85,PodSandboxId:2e1ee02dfee79aae3748308611205832d71c4ff9aef700d306cf26ac1dbd0e60,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374285184221956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\
"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710374186787396189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kuber
netes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710373793336215727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubern
etes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534968581242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534992474919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710373529504369187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710373509707712229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710373509712425832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa054b6f-d63d-4937-8d81-71616ad95b75 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.864912793Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=714ad2c7-6586-4a95-8604-de133a7d7c30 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.865337454Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1833af16e7cfaf95c1a0781cae51b9cb93c6afd490822c41b42c1317193f9dc0,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-dx92g,Uid:e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710374322937529142,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-13T23:49:49.740105671Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d374e5b744b402d34a336793d0b8b5232953c2e0e7acc19d595a5992ef77d217,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-dbkfv,Uid:bb55bb86-7637-4571-af89-55b34361d46f,Namespace:kube-system,Attempt:1,},Sta
te:SANDBOX_READY,CreatedAt:1710374289281645952,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-13T23:45:34.357252352Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88f517fd35061841044e028230a19ed5fe98e43f5febc68a588a05080328c658,Metadata:&PodSandboxMetadata{Name:kube-proxy-j56zl,Uid:9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710374289244788378,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kube
rnetes.io/config.seen: 2024-03-13T23:45:28.468943425Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:adde121b4482d5558f959b3bb88f58ebbf6ebb1c6271621ce69e6ca878f5584d,Metadata:&PodSandboxMetadata{Name:etcd-ha-504633,Uid:800b1d8694f42b67376c6e23b8dd8603,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710374289244187784,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.31:2379,kubernetes.io/config.hash: 800b1d8694f42b67376c6e23b8dd8603,kubernetes.io/config.seen: 2024-03-13T23:45:16.769893504Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&PodSandboxMetadata{Name:kindnet-8kvnb,Uid:b356234a-5293-417c-b78f
-8d532dfe1bc1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710374289243248100,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-13T23:45:28.470542470Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64b3632a81b1a0792699205e294b67f1fd4957d0511f594937ead53999b1b69e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-504633,Uid:c67e920ab8fd05e2d7c9a70920aeb5b4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710374289240860221,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70
920aeb5b4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c67e920ab8fd05e2d7c9a70920aeb5b4,kubernetes.io/config.seen: 2024-03-13T23:45:16.769896715Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-504633,Uid:00cdbdbd1a1d0aefa499a886ae738c0a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710374289231429340,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.31:8443,kubernetes.io/config.hash: 00cdbdbd1a1d0aefa499a886ae738c0a,kubernetes.io/config.seen: 2024-03-13T23:45:16.769894889Z,kubernetes.io/config.source: file,},RuntimeHa
ndler:,},&PodSandbox{Id:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-504633,Uid:e8a4476828b7f0f0c95498e085ba5df9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710374289190793195,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e8a4476828b7f0f0c95498e085ba5df9,kubernetes.io/config.seen: 2024-03-13T23:45:16.769896048Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0e57f625-8927-418c-bdf2-9022439f858c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710374289164232203,Labels:map[str
ing]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernete
s.io/config.seen: 2024-03-13T23:45:34.362520373Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:408abe06ec2bdfeb08109f0a100bdc461530fd555b24398ea142ee5bbfca4647,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-504633,Uid:b9f7ed25c0cb42b2cf61135e6a1c245f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710374289159247678,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{kubernetes.io/config.hash: b9f7ed25c0cb42b2cf61135e6a1c245f,kubernetes.io/config.seen: 2024-03-13T23:45:16.769890119Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2e1ee02dfee79aae3748308611205832d71c4ff9aef700d306cf26ac1dbd0e60,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-hh2kw,Uid:ac81d022-8c47-4f99-8a34-bb4f73ead561,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710374285007723962,L
abels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-13T23:45:34.345384580Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-dx92g,Uid:e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710373790958231932,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-13T23:49:49.740105671Z,kubernetes.io/config.sou
rce: api,},RuntimeHandler:,},&PodSandbox{Id:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-dbkfv,Uid:bb55bb86-7637-4571-af89-55b34361d46f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710373534672529565,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-13T23:45:34.357252352Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-hh2kw,Uid:ac81d022-8c47-4f99-8a34-bb4f73ead561,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710373534652530074,Labels:map[string]string{io.kubernetes.container.name: POD,io
.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-13T23:45:34.345384580Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&PodSandboxMetadata{Name:kube-proxy-j56zl,Uid:9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710373529406724107,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-13T23:45:28.468943425Z,kubernetes.io/config.source: api,},RuntimeHandler:,
},&PodSandbox{Id:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-504633,Uid:b9f7ed25c0cb42b2cf61135e6a1c245f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710373509387120801,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{kubernetes.io/config.hash: b9f7ed25c0cb42b2cf61135e6a1c245f,kubernetes.io/config.seen: 2024-03-13T23:45:08.911349302Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&PodSandboxMetadata{Name:etcd-ha-504633,Uid:800b1d8694f42b67376c6e23b8dd8603,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710373509385548633,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-50
4633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.31:2379,kubernetes.io/config.hash: 800b1d8694f42b67376c6e23b8dd8603,kubernetes.io/config.seen: 2024-03-13T23:45:08.911342636Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-504633,Uid:c67e920ab8fd05e2d7c9a70920aeb5b4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710373509378898250,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c67e920ab8fd05e2d7c9a70920aeb5b4,kubernetes.io/c
onfig.seen: 2024-03-13T23:45:08.911348659Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=714ad2c7-6586-4a95-8604-de133a7d7c30 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.866555554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d2abe0b-cb17-45a4-b886-098ccf2f7963 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.866734908Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d2abe0b-cb17-45a4-b886-098ccf2f7963 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.867388595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710374344805858713,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad93782f06ad008a87fef3300e81338c4438423e076a68d6c3ae4af2619b74f,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710374339792026087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710374334790569126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710374331792652786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b6800024430de256acac63a91aee0f3c4048c486b4b6053d6e59210fb898fe,PodSandboxId:1833af16e7cfaf95c1a0781cae51b9cb93c6afd490822c41b42c1317193f9dc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710374323069349310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791,PodSandboxId:88f517fd35061841044e028230a19ed5fe98e43f5febc68a588a05080328c658,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710374290395150151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:b964950d4816e397c8b058ba0d8f87de8d09cd44fda17dce515d39044c93f420,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710374289594055159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:a733ab586b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710374289989765088,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3
d6f776a6b20ee3c1b32374c40385cd3b826094de44efd90e86b2c4581cb25,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710374289880765157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed23280f4a5ee98a4f01e97
12529e4f0da45e69d85ca4208e58b8c051827bb,PodSandboxId:d374e5b744b402d34a336793d0b8b5232953c2e0e7acc19d595a5992ef77d217,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374289820663264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51,PodSandboxId:adde121b4482d5558f959b3bb88f58ebbf6ebb1c6271621ce69e6ca878f5584d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710374289759118794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba,PodSandboxId:64b3632a81b1a0792699205e294b67f1fd4957d0511f594937ead53999b1b69e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710374289723662664,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710374289629636370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85,PodSandboxId:2e1ee02dfee79aae3748308611205832d71c4ff9aef700d306cf26ac1dbd0e60,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374285184221956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\
"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710374186787396189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kuber
netes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710373793336215727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubern
etes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534968581242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534992474919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710373529504369187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710373509707712229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710373509712425832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d2abe0b-cb17-45a4-b886-098ccf2f7963 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.912293985Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21f602ee-cc57-4335-b558-65a0b03c893e name=/runtime.v1.RuntimeService/Version
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.912398163Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21f602ee-cc57-4335-b558-65a0b03c893e name=/runtime.v1.RuntimeService/Version
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.914387946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eff89644-473c-4231-903b-2e556ad16c93 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.915324062Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374443915297258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eff89644-473c-4231-903b-2e556ad16c93 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.916178250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82d75b32-4163-4ef7-83cc-621d2e636c3b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.916371425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82d75b32-4163-4ef7-83cc-621d2e636c3b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:00:43 ha-504633 crio[4294]: time="2024-03-14 00:00:43.916789594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710374344805858713,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad93782f06ad008a87fef3300e81338c4438423e076a68d6c3ae4af2619b74f,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710374339792026087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710374334790569126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710374331792652786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b6800024430de256acac63a91aee0f3c4048c486b4b6053d6e59210fb898fe,PodSandboxId:1833af16e7cfaf95c1a0781cae51b9cb93c6afd490822c41b42c1317193f9dc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710374323069349310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791,PodSandboxId:88f517fd35061841044e028230a19ed5fe98e43f5febc68a588a05080328c658,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710374290395150151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:b964950d4816e397c8b058ba0d8f87de8d09cd44fda17dce515d39044c93f420,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710374289594055159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:a733ab586b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710374289989765088,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3
d6f776a6b20ee3c1b32374c40385cd3b826094de44efd90e86b2c4581cb25,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710374289880765157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed23280f4a5ee98a4f01e97
12529e4f0da45e69d85ca4208e58b8c051827bb,PodSandboxId:d374e5b744b402d34a336793d0b8b5232953c2e0e7acc19d595a5992ef77d217,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374289820663264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51,PodSandboxId:adde121b4482d5558f959b3bb88f58ebbf6ebb1c6271621ce69e6ca878f5584d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710374289759118794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba,PodSandboxId:64b3632a81b1a0792699205e294b67f1fd4957d0511f594937ead53999b1b69e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710374289723662664,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710374289629636370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85,PodSandboxId:2e1ee02dfee79aae3748308611205832d71c4ff9aef700d306cf26ac1dbd0e60,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374285184221956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\
"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710374186787396189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kuber
netes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710373793336215727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubern
etes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534968581242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534992474919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710373529504369187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710373509707712229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710373509712425832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82d75b32-4163-4ef7-83cc-621d2e636c3b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	32eccfe2db8bf       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   0ae4db3977043       kindnet-8kvnb
	2ad93782f06ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   24d8f48eddc11       storage-provisioner
	6c12af0f98a84       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   2                   7114523c0a886       kube-controller-manager-ha-504633
	0bb4395e019a7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            3                   3a02e247a65fa       kube-apiserver-ha-504633
	d5b6800024430       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   1833af16e7cfa       busybox-5b5d89c9d6-dx92g
	365fcf57ea467       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      2 minutes ago        Running             kube-proxy                1                   88f517fd35061       kube-proxy-j56zl
	a733ab586b563       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   0ae4db3977043       kindnet-8kvnb
	be3d6f776a6b2       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago        Exited              kube-apiserver            2                   3a02e247a65fa       kube-apiserver-ha-504633
	a6ed23280f4a5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   d374e5b744b40       coredns-5dd5756b68-dbkfv
	28e15c659f106       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      2 minutes ago        Running             etcd                      1                   adde121b4482d       etcd-ha-504633
	597de64e318a0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      2 minutes ago        Running             kube-scheduler            1                   64b3632a81b1a       kube-scheduler-ha-504633
	e53161751ea00       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago        Exited              kube-controller-manager   1                   7114523c0a886       kube-controller-manager-ha-504633
	b964950d4816e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   24d8f48eddc11       storage-provisioner
	a32ba91e1ce55       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   2e1ee02dfee79       coredns-5dd5756b68-hh2kw
	156780ad31a1b       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      4 minutes ago        Exited              kube-vip                  8                   6664331d2d846       kube-vip-ha-504633
	3e670be31d057       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   44694d6d0ddb1       busybox-5b5d89c9d6-dx92g
	91c5fdb6071ed       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      15 minutes ago       Exited              coredns                   0                   ac06f7523df34       coredns-5dd5756b68-dbkfv
	cea68e46e7574       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      15 minutes ago       Exited              coredns                   0                   99eec3703a3ac       coredns-5dd5756b68-hh2kw
	ce0dc1e514cfe       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      15 minutes ago       Exited              kube-proxy                0                   508491d3a970a       kube-proxy-j56zl
	ec04eb9f36ad1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      15 minutes ago       Exited              etcd                      0                   2e892e8826932       etcd-ha-504633
	03595624eed74       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      15 minutes ago       Exited              kube-scheduler            0                   e5651d5d4cdf1       kube-scheduler-ha-504633
	
	
	==> coredns [91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025] <==
	[INFO] 10.244.2.2:45263 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00157594s
	[INFO] 10.244.2.2:56184 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095082s
	[INFO] 10.244.2.2:38062 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145314s
	[INFO] 10.244.2.2:47535 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099682s
	[INFO] 10.244.1.2:38146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248518s
	[INFO] 10.244.1.2:54521 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00160289s
	[INFO] 10.244.1.2:34985 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001396473s
	[INFO] 10.244.1.2:37504 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127175s
	[INFO] 10.244.1.2:47786 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089644s
	[INFO] 10.244.0.4:42865 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167315s
	[INFO] 10.244.2.2:37374 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167385s
	[INFO] 10.244.2.2:33251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009522s
	[INFO] 10.244.1.2:39140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158704s
	[INFO] 10.244.1.2:36398 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143215s
	[INFO] 10.244.1.2:60528 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012073s
	[INFO] 10.244.1.2:45057 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013653s
	[INFO] 10.244.0.4:55605 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153423s
	[INFO] 10.244.1.2:37595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218212s
	[INFO] 10.244.1.2:45054 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155156s
	[INFO] 10.244.1.2:45734 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159775s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46251 - 3158 "HINFO IN 4020314174239755005.6788368900148723181. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006083937s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a6ed23280f4a5ee98a4f01e9712529e4f0da45e69d85ca4208e58b8c051827bb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39174 - 27518 "HINFO IN 5234567318487603077.6782029109910001331. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009507212s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53538->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d] <==
	[INFO] 10.244.0.4:36734 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153111s
	[INFO] 10.244.0.4:36918 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002576888s
	[INFO] 10.244.2.2:52506 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000216481s
	[INFO] 10.244.2.2:41181 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142291s
	[INFO] 10.244.1.2:41560 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185807s
	[INFO] 10.244.1.2:34843 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104567s
	[INFO] 10.244.1.2:36490 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000226318s
	[INFO] 10.244.0.4:60091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107953s
	[INFO] 10.244.0.4:37327 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151724s
	[INFO] 10.244.0.4:35399 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043972s
	[INFO] 10.244.2.2:59809 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090745s
	[INFO] 10.244.2.2:40239 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069623s
	[INFO] 10.244.0.4:36867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127937s
	[INFO] 10.244.0.4:35854 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000195121s
	[INFO] 10.244.0.4:56742 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109765s
	[INFO] 10.244.2.2:33696 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132875s
	[INFO] 10.244.2.2:51474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149174s
	[INFO] 10.244.2.2:58642 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010185s
	[INFO] 10.244.2.2:58203 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089769s
	[INFO] 10.244.1.2:54587 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000118471s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-504633
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_13T23_45_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:45:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:00:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Mar 2024 23:59:00 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Mar 2024 23:59:00 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Mar 2024 23:59:00 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Mar 2024 23:59:00 +0000   Wed, 13 Mar 2024 23:45:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.31
	  Hostname:    ha-504633
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 13fd8f4b90794ddf8d3d6bdb9051c529
	  System UUID:                13fd8f4b-9079-4ddf-8d3d-6bdb9051c529
	  Boot ID:                    83daf814-565c-4717-8930-43f7c53558eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-dx92g             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-dbkfv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-5dd5756b68-hh2kw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-504633                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-8kvnb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-504633             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-504633    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-j56zl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-504633             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-504633                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 111s               kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-504633 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-504633 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-504633 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  15m                kubelet          Node ha-504633 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                kubelet          Node ha-504633 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m                kubelet          Node ha-504633 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal   NodeReady                15m                kubelet          Node ha-504633 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Warning  ContainerGCFailed        3m28s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           104s               node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal   RegisteredNode           98s                node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal   RegisteredNode           32s                node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	
	
	Name:               ha-504633-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_13T23_47_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:47:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:00:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Mar 2024 23:59:34 +0000   Wed, 13 Mar 2024 23:59:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Mar 2024 23:59:34 +0000   Wed, 13 Mar 2024 23:59:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Mar 2024 23:59:34 +0000   Wed, 13 Mar 2024 23:59:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Mar 2024 23:59:34 +0000   Wed, 13 Mar 2024 23:59:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    ha-504633-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f6ba1a02ba14580ac16771f2b426854
	  System UUID:                5f6ba1a0-2ba1-4580-ac16-771f2b426854
	  Boot ID:                    213c5b73-5c4f-4560-89e6-87c5c4535369
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-zfjjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-504633-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-f4pz8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-504633-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-504633-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4s9t5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-504633-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-504633-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 83s                    kube-proxy       
	  Normal  RegisteredNode           13m                    node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  NodeNotReady             9m1s                   node-controller  Node ha-504633-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  2m18s (x8 over 2m18s)  kubelet          Node ha-504633-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m18s (x8 over 2m18s)  kubelet          Node ha-504633-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m18s (x7 over 2m18s)  kubelet          Node ha-504633-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           104s                   node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode           98s                    node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode           32s                    node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	
	
	Name:               ha-504633-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_13T23_49_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:49:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:00:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 00:00:13 +0000   Wed, 13 Mar 2024 23:59:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 00:00:13 +0000   Wed, 13 Mar 2024 23:59:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 00:00:13 +0000   Wed, 13 Mar 2024 23:59:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 00:00:13 +0000   Wed, 13 Mar 2024 23:59:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    ha-504633-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 8771be2dfdfd44f18d592fcb20bb5a4c
	  System UUID:                8771be2d-fdfd-44f1-8d59-2fcb20bb5a4c
	  Boot ID:                    008351f2-a93a-4879-a8be-4d45aefe9d06
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-prmkb                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-504633-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-5gfqz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-504633-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-504633-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-fgcxp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-504633-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-504633-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 37s                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   RegisteredNode           11m                node-controller  Node ha-504633-m03 event: Registered Node ha-504633-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-504633-m03 event: Registered Node ha-504633-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-504633-m03 event: Registered Node ha-504633-m03 in Controller
	  Normal   RegisteredNode           104s               node-controller  Node ha-504633-m03 event: Registered Node ha-504633-m03 in Controller
	  Normal   RegisteredNode           98s                node-controller  Node ha-504633-m03 event: Registered Node ha-504633-m03 in Controller
	  Normal   NodeNotReady             64s                node-controller  Node ha-504633-m03 status is now: NodeNotReady
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  62s (x2 over 62s)  kubelet          Node ha-504633-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x2 over 62s)  kubelet          Node ha-504633-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x2 over 62s)  kubelet          Node ha-504633-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 62s                kubelet          Node ha-504633-m03 has been rebooted, boot id: 008351f2-a93a-4879-a8be-4d45aefe9d06
	  Normal   NodeReady                62s                kubelet          Node ha-504633-m03 status is now: NodeReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-504633-m03 event: Registered Node ha-504633-m03 in Controller
	
	
	Name:               ha-504633-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_13T23_50_35_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:50:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:00:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 00:00:35 +0000   Thu, 14 Mar 2024 00:00:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 00:00:35 +0000   Thu, 14 Mar 2024 00:00:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 00:00:35 +0000   Thu, 14 Mar 2024 00:00:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 00:00:35 +0000   Thu, 14 Mar 2024 00:00:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-504633-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d985b67edcea4528bf49bb9fe5eeb65e
	  System UUID:                d985b67e-dcea-4528-bf49-bb9fe5eeb65e
	  Boot ID:                    9a8ee068-23d6-49e5-a453-ae122df76fb3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dn6gl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-7hr7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x5 over 10m)  kubelet          Node ha-504633-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x5 over 10m)  kubelet          Node ha-504633-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x5 over 10m)  kubelet          Node ha-504633-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   NodeReady                9m59s              kubelet          Node ha-504633-m04 status is now: NodeReady
	  Normal   RegisteredNode           104s               node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   RegisteredNode           98s                node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   NodeNotReady             64s                node-controller  Node ha-504633-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-504633-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-504633-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-504633-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-504633-m04 has been rebooted, boot id: 9a8ee068-23d6-49e5-a453-ae122df76fb3
	  Normal   NodeReady                9s (x2 over 9s)    kubelet          Node ha-504633-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +9.715783] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.056763] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061483] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.171003] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.142829] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.235386] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Mar13 23:45] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.057845] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.706645] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.862236] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.155181] kauditd_printk_skb: 51 callbacks suppressed
	[  +2.379152] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[ +12.986535] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.322868] kauditd_printk_skb: 43 callbacks suppressed
	[Mar13 23:46] kauditd_printk_skb: 27 callbacks suppressed
	[Mar13 23:58] systemd-fstab-generator[4216]: Ignoring "noauto" option for root device
	[  +0.153832] systemd-fstab-generator[4228]: Ignoring "noauto" option for root device
	[  +0.187781] systemd-fstab-generator[4242]: Ignoring "noauto" option for root device
	[  +0.149225] systemd-fstab-generator[4254]: Ignoring "noauto" option for root device
	[  +0.268609] systemd-fstab-generator[4278]: Ignoring "noauto" option for root device
	[  +0.830313] systemd-fstab-generator[4380]: Ignoring "noauto" option for root device
	[  +5.053578] kauditd_printk_skb: 132 callbacks suppressed
	[  +7.758209] kauditd_printk_skb: 80 callbacks suppressed
	[ +34.892345] kauditd_printk_skb: 1 callbacks suppressed
	[  +7.328226] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51] <==
	{"level":"warn","ts":"2024-03-13T23:59:36.295208Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:59:36.395475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:59:36.495171Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c25b0656f1ce3d71","from":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-13T23:59:38.854468Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.156:2380/version","remote-member-id":"8a6ebbe0b7bc25b1","error":"Get \"https://192.168.39.156:2380/version\": dial tcp 192.168.39.156:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-13T23:59:38.854547Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"8a6ebbe0b7bc25b1","error":"Get \"https://192.168.39.156:2380/version\": dial tcp 192.168.39.156:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-13T23:59:40.923605Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8a6ebbe0b7bc25b1","rtt":"0s","error":"dial tcp 192.168.39.156:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-13T23:59:40.931893Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8a6ebbe0b7bc25b1","rtt":"0s","error":"dial tcp 192.168.39.156:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-13T23:59:42.857463Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.156:2380/version","remote-member-id":"8a6ebbe0b7bc25b1","error":"Get \"https://192.168.39.156:2380/version\": dial tcp 192.168.39.156:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-13T23:59:42.857667Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"8a6ebbe0b7bc25b1","error":"Get \"https://192.168.39.156:2380/version\": dial tcp 192.168.39.156:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-13T23:59:45.923911Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8a6ebbe0b7bc25b1","rtt":"0s","error":"dial tcp 192.168.39.156:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-13T23:59:45.932125Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8a6ebbe0b7bc25b1","rtt":"0s","error":"dial tcp 192.168.39.156:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-13T23:59:46.862434Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.156:2380/version","remote-member-id":"8a6ebbe0b7bc25b1","error":"Get \"https://192.168.39.156:2380/version\": dial tcp 192.168.39.156:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-13T23:59:46.862542Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"8a6ebbe0b7bc25b1","error":"Get \"https://192.168.39.156:2380/version\": dial tcp 192.168.39.156:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-13T23:59:50.864019Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.156:2380/version","remote-member-id":"8a6ebbe0b7bc25b1","error":"Get \"https://192.168.39.156:2380/version\": dial tcp 192.168.39.156:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-13T23:59:50.86422Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"8a6ebbe0b7bc25b1","error":"Get \"https://192.168.39.156:2380/version\": dial tcp 192.168.39.156:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-13T23:59:50.924864Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8a6ebbe0b7bc25b1","rtt":"0s","error":"dial tcp 192.168.39.156:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-13T23:59:50.933375Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8a6ebbe0b7bc25b1","rtt":"0s","error":"dial tcp 192.168.39.156:2380: connect: connection refused"}
	{"level":"info","ts":"2024-03-13T23:59:53.825578Z","caller":"traceutil/trace.go:171","msg":"trace[269743063] transaction","detail":"{read_only:false; response_revision:2063; number_of_response:1; }","duration":"165.628751ms","start":"2024-03-13T23:59:53.659924Z","end":"2024-03-13T23:59:53.825552Z","steps":["trace[269743063] 'process raft request'  (duration: 165.485531ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-13T23:59:54.1971Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c25b0656f1ce3d71","to":"8a6ebbe0b7bc25b1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-13T23:59:54.197206Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:59:54.197253Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:59:54.214156Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:59:54.218035Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c25b0656f1ce3d71","to":"8a6ebbe0b7bc25b1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-13T23:59:54.218084Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:59:54.220119Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	
	
	==> etcd [ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714] <==
	{"level":"info","ts":"2024-03-13T23:56:31.030267Z","caller":"traceutil/trace.go:171","msg":"trace[82422090] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; }","duration":"146.582127ms","start":"2024-03-13T23:56:30.883677Z","end":"2024-03-13T23:56:31.03026Z","steps":["trace[82422090] 'agreement among raft nodes before linearized reading'  (duration: 133.285904ms)"],"step_count":1}
	WARNING: 2024/03/13 23:56:31 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-13T23:56:31.030362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.824463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-03-13T23:56:31.030376Z","caller":"traceutil/trace.go:171","msg":"trace[30097541] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; }","duration":"117.845034ms","start":"2024-03-13T23:56:30.912526Z","end":"2024-03-13T23:56:31.030371Z","steps":["trace[30097541] 'agreement among raft nodes before linearized reading'  (duration: 117.823726ms)"],"step_count":1}
	WARNING: 2024/03/13 23:56:31 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-13T23:56:31.046603Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.31:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-13T23:56:31.046656Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.31:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-13T23:56:31.046727Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"c25b0656f1ce3d71","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-13T23:56:31.046878Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.04692Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047134Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047383Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047473Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047537Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047579Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047588Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047598Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.04764Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047719Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047766Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047821Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047855Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.051496Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.31:2380"}
	{"level":"info","ts":"2024-03-13T23:56:31.051611Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.31:2380"}
	{"level":"info","ts":"2024-03-13T23:56:31.051643Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-504633","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.31:2380"],"advertise-client-urls":["https://192.168.39.31:2379"]}
	
	
	==> kernel <==
	 00:00:44 up 16 min,  0 users,  load average: 0.64, 0.53, 0.38
	Linux ha-504633 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48] <==
	I0314 00:00:05.851826       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0314 00:00:15.867917       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0314 00:00:15.868014       1 main.go:227] handling current node
	I0314 00:00:15.868035       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0314 00:00:15.868042       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0314 00:00:15.868236       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:00:15.868268       1 main.go:250] Node ha-504633-m03 has CIDR [10.244.2.0/24] 
	I0314 00:00:15.868338       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0314 00:00:15.868363       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0314 00:00:25.879355       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0314 00:00:25.879377       1 main.go:227] handling current node
	I0314 00:00:25.879387       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0314 00:00:25.879392       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0314 00:00:25.879531       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:00:25.879561       1 main.go:250] Node ha-504633-m03 has CIDR [10.244.2.0/24] 
	I0314 00:00:25.879649       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0314 00:00:25.879656       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0314 00:00:35.900064       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0314 00:00:35.900165       1 main.go:227] handling current node
	I0314 00:00:35.900179       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0314 00:00:35.900185       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0314 00:00:35.900894       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:00:35.900908       1 main.go:250] Node ha-504633-m03 has CIDR [10.244.2.0/24] 
	I0314 00:00:35.901063       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0314 00:00:35.901092       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [a733ab586b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f] <==
	I0313 23:58:10.556238       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0313 23:58:10.558062       1 main.go:107] hostIP = 192.168.39.31
	podIP = 192.168.39.31
	I0313 23:58:10.558267       1 main.go:116] setting mtu 1500 for CNI 
	I0313 23:58:10.561039       1 main.go:146] kindnetd IP family: "ipv4"
	I0313 23:58:10.561117       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0313 23:58:12.048699       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0313 23:58:15.120735       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0313 23:58:26.122664       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0313 23:58:30.480524       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0313 23:58:33.552553       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0] <==
	I0313 23:58:54.080722       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0313 23:58:54.080852       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0313 23:58:54.092228       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0313 23:58:54.092295       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0313 23:58:54.092382       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0313 23:58:54.092558       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0313 23:58:54.154312       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0313 23:58:54.165829       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0313 23:58:54.171516       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0313 23:58:54.171529       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0313 23:58:54.172185       1 shared_informer.go:318] Caches are synced for configmaps
	I0313 23:58:54.175389       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0313 23:58:54.175565       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0313 23:58:54.179153       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	W0313 23:58:54.188452       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.156 192.168.39.47]
	I0313 23:58:54.190384       1 controller.go:624] quota admission added evaluator for: endpoints
	I0313 23:58:54.192454       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0313 23:58:54.192514       1 aggregator.go:166] initial CRD sync complete...
	I0313 23:58:54.192536       1 autoregister_controller.go:141] Starting autoregister controller
	I0313 23:58:54.192544       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0313 23:58:54.192552       1 cache.go:39] Caches are synced for autoregister controller
	I0313 23:58:54.200231       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0313 23:58:54.207778       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0313 23:58:55.095788       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0313 23:58:55.628745       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.156 192.168.39.31 192.168.39.47]
	
	
	==> kube-apiserver [be3d6f776a6b20ee3c1b32374c40385cd3b826094de44efd90e86b2c4581cb25] <==
	I0313 23:58:10.587833       1 options.go:220] external host was not specified, using 192.168.39.31
	I0313 23:58:10.589141       1 server.go:148] Version: v1.28.4
	I0313 23:58:10.589190       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:58:10.960512       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0313 23:58:10.972325       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0313 23:58:10.972408       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0313 23:58:10.972706       1 instance.go:298] Using reconciler: lease
	W0313 23:58:30.947173       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0313 23:58:30.952347       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0313 23:58:30.973636       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a] <==
	I0313 23:59:06.588502       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0313 23:59:06.619269       1 shared_informer.go:318] Caches are synced for endpoint
	I0313 23:59:06.619357       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0313 23:59:06.662041       1 shared_informer.go:318] Caches are synced for resource quota
	I0313 23:59:06.669050       1 shared_informer.go:318] Caches are synced for stateful set
	I0313 23:59:06.709104       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0313 23:59:06.733238       1 shared_informer.go:318] Caches are synced for resource quota
	I0313 23:59:06.749686       1 shared_informer.go:318] Caches are synced for daemon sets
	I0313 23:59:07.101174       1 shared_informer.go:318] Caches are synced for garbage collector
	I0313 23:59:07.145391       1 shared_informer.go:318] Caches are synced for garbage collector
	I0313 23:59:07.145438       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0313 23:59:11.071778       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="78.624µs"
	I0313 23:59:14.712471       1 event.go:307] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0313 23:59:14.724951       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="failed to update kube-dns-79wx5 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-79wx5\": the object has been modified; please apply your changes to the latest version and try again"
	I0313 23:59:14.726223       1 event.go:298] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"0c68d020-8d76-43ff-a8dd-d30cf36099a6", APIVersion:"v1", ResourceVersion:"287", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-79wx5 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-79wx5": the object has been modified; please apply your changes to the latest version and try again
	I0313 23:59:14.734872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.626658ms"
	I0313 23:59:14.735123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="149.184µs"
	I0313 23:59:24.238127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.382695ms"
	I0313 23:59:24.238451       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="65.275µs"
	I0313 23:59:40.269837       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.8631ms"
	I0313 23:59:40.270826       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="67.713µs"
	I0313 23:59:43.528420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="69.485µs"
	I0314 00:00:04.699180       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="37.278132ms"
	I0314 00:00:04.699498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="116.729µs"
	I0314 00:00:35.951443       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-504633-m04"
	
	
	==> kube-controller-manager [e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5] <==
	I0313 23:58:11.449358       1 serving.go:348] Generated self-signed cert in-memory
	I0313 23:58:11.925415       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0313 23:58:11.925496       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:58:11.927114       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0313 23:58:11.927301       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0313 23:58:11.928174       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0313 23:58:11.928326       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0313 23:58:31.981043       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.31:8443/healthz\": dial tcp 192.168.39.31:8443: connect: connection refused"
	
	
	==> kube-proxy [365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791] <==
	I0313 23:58:11.279414       1 server_others.go:69] "Using iptables proxy"
	E0313 23:58:12.945927       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:16.017290       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:19.090373       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:25.232466       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:34.449562       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:52.880584       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	I0313 23:58:52.883107       1 server.go:969] "Can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag"
	I0313 23:58:52.945876       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0313 23:58:52.945937       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0313 23:58:52.950525       1 server_others.go:152] "Using iptables Proxier"
	I0313 23:58:52.951121       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0313 23:58:52.952171       1 server.go:846] "Version info" version="v1.28.4"
	I0313 23:58:52.952216       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:58:52.957526       1 config.go:188] "Starting service config controller"
	I0313 23:58:52.957594       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0313 23:58:52.957651       1 config.go:97] "Starting endpoint slice config controller"
	I0313 23:58:52.957659       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0313 23:58:52.958592       1 config.go:315] "Starting node config controller"
	I0313 23:58:52.958637       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0313 23:58:54.958592       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0313 23:58:54.958700       1 shared_informer.go:318] Caches are synced for node config
	I0313 23:58:54.958712       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a] <==
	I0313 23:45:29.711578       1 server_others.go:69] "Using iptables proxy"
	I0313 23:45:29.730452       1 node.go:141] Successfully retrieved node IP: 192.168.39.31
	I0313 23:45:29.778135       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0313 23:45:29.778173       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0313 23:45:29.781710       1 server_others.go:152] "Using iptables Proxier"
	I0313 23:45:29.782511       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0313 23:45:29.782796       1 server.go:846] "Version info" version="v1.28.4"
	I0313 23:45:29.782835       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:45:29.784428       1 config.go:188] "Starting service config controller"
	I0313 23:45:29.785222       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0313 23:45:29.785343       1 config.go:315] "Starting node config controller"
	I0313 23:45:29.785372       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0313 23:45:29.785796       1 config.go:97] "Starting endpoint slice config controller"
	I0313 23:45:29.785829       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0313 23:45:29.885734       1 shared_informer.go:318] Caches are synced for node config
	I0313 23:45:29.885761       1 shared_informer.go:318] Caches are synced for service config
	I0313 23:45:29.886938       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33] <==
	W0313 23:56:27.311602       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0313 23:56:27.311659       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0313 23:56:27.533068       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0313 23:56:27.533165       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0313 23:56:27.604660       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0313 23:56:27.604838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0313 23:56:27.754398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0313 23:56:27.754565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0313 23:56:27.772480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0313 23:56:27.772646       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0313 23:56:27.791220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0313 23:56:27.791266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0313 23:56:27.793169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0313 23:56:27.793208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0313 23:56:28.027210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0313 23:56:28.027234       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0313 23:56:28.092424       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0313 23:56:28.092523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0313 23:56:28.939272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0313 23:56:28.939349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0313 23:56:29.169752       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0313 23:56:29.169860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0313 23:56:31.002106       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0313 23:56:31.002261       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0313 23:56:31.002463       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba] <==
	W0313 23:58:47.344743       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.31:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:47.344933       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.31:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:47.402616       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.31:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:47.402681       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.31:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:47.947622       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.31:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:47.947699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.31:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:48.337094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.31:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:48.337168       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.31:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:48.509652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.31:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:48.509716       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.31:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:48.904352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.39.31:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:48.904411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.31:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:49.047777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.31:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:49.047939       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.31:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:49.433391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.31:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:49.433515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.31:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:50.731428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.31:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:50.731551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.31:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:51.059885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.31:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:51.060023       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.31:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:51.726045       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.31:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:51.726148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.31:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:51.991331       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.31:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:51.991397       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.31:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	I0313 23:59:07.387507       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 13 23:59:15 ha-504633 kubelet[1439]: E0313 23:59:15.777901    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:59:16 ha-504633 kubelet[1439]: E0313 23:59:16.828413    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 13 23:59:16 ha-504633 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 13 23:59:16 ha-504633 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 13 23:59:16 ha-504633 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 13 23:59:16 ha-504633 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 13 23:59:28 ha-504633 kubelet[1439]: I0313 23:59:28.777548    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	Mar 13 23:59:28 ha-504633 kubelet[1439]: E0313 23:59:28.780468    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:59:39 ha-504633 kubelet[1439]: I0313 23:59:39.777427    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	Mar 13 23:59:39 ha-504633 kubelet[1439]: E0313 23:59:39.778259    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 13 23:59:52 ha-504633 kubelet[1439]: I0313 23:59:52.777038    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	Mar 13 23:59:52 ha-504633 kubelet[1439]: E0313 23:59:52.777910    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:00:04 ha-504633 kubelet[1439]: I0314 00:00:04.777282    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	Mar 14 00:00:04 ha-504633 kubelet[1439]: E0314 00:00:04.777683    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:00:16 ha-504633 kubelet[1439]: E0314 00:00:16.830917    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 00:00:16 ha-504633 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 00:00:16 ha-504633 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 00:00:16 ha-504633 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 00:00:16 ha-504633 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 00:00:19 ha-504633 kubelet[1439]: I0314 00:00:19.776622    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	Mar 14 00:00:19 ha-504633 kubelet[1439]: E0314 00:00:19.777499    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:00:30 ha-504633 kubelet[1439]: I0314 00:00:30.777532    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	Mar 14 00:00:30 ha-504633 kubelet[1439]: E0314 00:00:30.778147    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:00:42 ha-504633 kubelet[1439]: I0314 00:00:42.776943    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	Mar 14 00:00:42 ha-504633 kubelet[1439]: E0314 00:00:42.778456    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:00:43.383981   29475 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18375-4912/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
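Editor's note: the "bufio.Scanner: token too long" error in the stderr above is what Go's line scanner reports when a single line exceeds its default 64 KiB token limit, which very long start logs can do. The sketch below is not minikube's actual logs.go code, only a minimal illustration of the failure mode and the usual remedy (a larger scanner buffer); the file path is copied from the stderr above and would need adjusting elsewhere.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Path taken from the stderr above; adjust for your environment.
	f, err := os.Open("/home/jenkins/minikube-integration/18375-4912/.minikube/logs/lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Without this call the scanner caps tokens at bufio.MaxScanTokenSize (64 KiB),
	// and any longer line makes sc.Err() return "bufio.Scanner: token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024)
	lines := 0
	for sc.Scan() {
		lines++
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
		return
	}
	fmt.Println("read", lines, "lines")
}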
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-504633 -n ha-504633
helpers_test.go:261: (dbg) Run:  kubectl --context ha-504633 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/RestartClusterKeepsNodes (377.78s)
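Editor's note: the repeated "back-off 5m0s restarting failed container=kube-vip" entries in the kubelet log above are kubelet's CrashLoopBackOff plateau: the restart delay roughly doubles from a short initial value until it caps at five minutes, so the same message recurs every retry window. The loop below only illustrates that schedule; it is not kubelet code, and the 10s base and 5m cap are assumed defaults, not values read from this cluster.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s initial back-off, doubling, capped at 5m.
	delay := 10 * time.Second
	const maxBackoff = 5 * time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart attempt %d scheduled after %v\n", attempt, delay)
		delay *= 2
		if delay > maxBackoff {
			delay = maxBackoff // matches the "back-off 5m0s" plateau seen in the log
		}
	}
}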

                                                
                                    
x
+
TestMutliControlPlane/serial/DeleteSecondaryNode (63.89s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-504633 node delete m03 -v=7 --alsologtostderr: (26.547691989s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr: exit status 2 (31.766964704s)

                                                
                                                
-- stdout --
	ha-504633
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-504633-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-504633-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 00:01:12.336752   29728 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:01:12.336943   29728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:01:12.336969   29728 out.go:304] Setting ErrFile to fd 2...
	I0314 00:01:12.336985   29728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:01:12.337470   29728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:01:12.337729   29728 out.go:298] Setting JSON to false
	I0314 00:01:12.337769   29728 mustload.go:65] Loading cluster: ha-504633
	I0314 00:01:12.337878   29728 notify.go:220] Checking for updates...
	I0314 00:01:12.338156   29728 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:01:12.338170   29728 status.go:255] checking status of ha-504633 ...
	I0314 00:01:12.338533   29728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:01:12.338580   29728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:01:12.357156   29728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I0314 00:01:12.357659   29728 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:01:12.358288   29728 main.go:141] libmachine: Using API Version  1
	I0314 00:01:12.358310   29728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:01:12.358682   29728 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:01:12.358966   29728 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0314 00:01:12.360852   29728 status.go:330] ha-504633 host status = "Running" (err=<nil>)
	I0314 00:01:12.360870   29728 host.go:66] Checking if "ha-504633" exists ...
	I0314 00:01:12.361160   29728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:01:12.361203   29728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:01:12.376911   29728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41711
	I0314 00:01:12.377414   29728 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:01:12.377991   29728 main.go:141] libmachine: Using API Version  1
	I0314 00:01:12.378021   29728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:01:12.378378   29728 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:01:12.378581   29728 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0314 00:01:12.381209   29728 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0314 00:01:12.381624   29728 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0314 00:01:12.381662   29728 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0314 00:01:12.381823   29728 host.go:66] Checking if "ha-504633" exists ...
	I0314 00:01:12.382088   29728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:01:12.382128   29728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:01:12.396902   29728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39809
	I0314 00:01:12.397374   29728 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:01:12.397824   29728 main.go:141] libmachine: Using API Version  1
	I0314 00:01:12.397847   29728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:01:12.398138   29728 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:01:12.398337   29728 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0314 00:01:12.399042   29728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:01:12.399095   29728 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0314 00:01:12.401848   29728 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0314 00:01:12.402308   29728 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0314 00:01:12.402350   29728 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0314 00:01:12.402536   29728 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0314 00:01:12.402738   29728 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0314 00:01:12.402943   29728 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0314 00:01:12.403155   29728 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0314 00:01:12.495700   29728 ssh_runner.go:195] Run: systemctl --version
	I0314 00:01:12.505126   29728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:01:12.523873   29728 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0314 00:01:12.523901   29728 api_server.go:166] Checking apiserver status ...
	I0314 00:01:12.523933   29728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:01:12.544463   29728 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5487/cgroup
	W0314 00:01:12.555698   29728 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5487/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:01:12.555774   29728 ssh_runner.go:195] Run: ls
	I0314 00:01:12.561453   29728 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 00:01:17.562269   29728 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 00:01:17.562337   29728 retry.go:31] will retry after 257.20021ms: state is "Stopped"
	I0314 00:01:17.819716   29728 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 00:01:22.820212   29728 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 00:01:22.820270   29728 retry.go:31] will retry after 316.467601ms: state is "Stopped"
	I0314 00:01:23.137807   29728 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 00:01:28.138913   29728 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 00:01:28.138960   29728 status.go:422] ha-504633 apiserver status = Running (err=<nil>)
	I0314 00:01:28.138975   29728 status.go:257] ha-504633 status: &{Name:ha-504633 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:01:28.139019   29728 status.go:255] checking status of ha-504633-m02 ...
	I0314 00:01:28.139356   29728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:01:28.139391   29728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:01:28.154075   29728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37279
	I0314 00:01:28.154506   29728 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:01:28.154971   29728 main.go:141] libmachine: Using API Version  1
	I0314 00:01:28.155002   29728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:01:28.155318   29728 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:01:28.155554   29728 main.go:141] libmachine: (ha-504633-m02) Calling .GetState
	I0314 00:01:28.157424   29728 status.go:330] ha-504633-m02 host status = "Running" (err=<nil>)
	I0314 00:01:28.157442   29728 host.go:66] Checking if "ha-504633-m02" exists ...
	I0314 00:01:28.157724   29728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:01:28.157757   29728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:01:28.172042   29728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I0314 00:01:28.172432   29728 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:01:28.172887   29728 main.go:141] libmachine: Using API Version  1
	I0314 00:01:28.172910   29728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:01:28.173291   29728 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:01:28.173502   29728 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0314 00:01:28.176565   29728 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0314 00:01:28.176979   29728 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:58:16 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0314 00:01:28.177013   29728 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0314 00:01:28.177193   29728 host.go:66] Checking if "ha-504633-m02" exists ...
	I0314 00:01:28.177474   29728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:01:28.177509   29728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:01:28.193754   29728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46549
	I0314 00:01:28.194246   29728 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:01:28.194730   29728 main.go:141] libmachine: Using API Version  1
	I0314 00:01:28.194754   29728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:01:28.195089   29728 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:01:28.195295   29728 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0314 00:01:28.195496   29728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:01:28.195515   29728 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0314 00:01:28.197950   29728 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0314 00:01:28.198340   29728 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:58:16 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0314 00:01:28.198369   29728 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0314 00:01:28.198520   29728 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0314 00:01:28.198679   29728 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0314 00:01:28.198841   29728 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0314 00:01:28.198997   29728 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	I0314 00:01:28.280928   29728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:01:28.303852   29728 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0314 00:01:28.303881   29728 api_server.go:166] Checking apiserver status ...
	I0314 00:01:28.303922   29728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:01:28.321906   29728 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup
	W0314 00:01:28.335711   29728 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:01:28.335795   29728 ssh_runner.go:195] Run: ls
	I0314 00:01:28.341495   29728 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 00:01:33.342218   29728 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 00:01:33.342263   29728 retry.go:31] will retry after 274.908012ms: state is "Stopped"
	I0314 00:01:33.617741   29728 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 00:01:38.618383   29728 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 00:01:38.618434   29728 retry.go:31] will retry after 253.473311ms: state is "Stopped"
	I0314 00:01:38.872787   29728 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 00:01:43.873660   29728 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 00:01:43.873724   29728 status.go:422] ha-504633-m02 apiserver status = Running (err=<nil>)
	I0314 00:01:43.873735   29728 status.go:257] ha-504633-m02 status: &{Name:ha-504633-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:01:43.873755   29728 status.go:255] checking status of ha-504633-m04 ...
	I0314 00:01:43.874328   29728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:01:43.874405   29728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:01:43.889632   29728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42993
	I0314 00:01:43.890104   29728 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:01:43.890621   29728 main.go:141] libmachine: Using API Version  1
	I0314 00:01:43.890646   29728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:01:43.891074   29728 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:01:43.891306   29728 main.go:141] libmachine: (ha-504633-m04) Calling .GetState
	I0314 00:01:43.893225   29728 status.go:330] ha-504633-m04 host status = "Running" (err=<nil>)
	I0314 00:01:43.893246   29728 host.go:66] Checking if "ha-504633-m04" exists ...
	I0314 00:01:43.893674   29728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:01:43.893721   29728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:01:43.909209   29728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46321
	I0314 00:01:43.909671   29728 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:01:43.910253   29728 main.go:141] libmachine: Using API Version  1
	I0314 00:01:43.910279   29728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:01:43.910632   29728 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:01:43.910945   29728 main.go:141] libmachine: (ha-504633-m04) Calling .GetIP
	I0314 00:01:43.914382   29728 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0314 00:01:43.914867   29728 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 01:00:30 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0314 00:01:43.914897   29728 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0314 00:01:43.915033   29728 host.go:66] Checking if "ha-504633-m04" exists ...
	I0314 00:01:43.915436   29728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:01:43.915483   29728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:01:43.931149   29728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43857
	I0314 00:01:43.931581   29728 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:01:43.932115   29728 main.go:141] libmachine: Using API Version  1
	I0314 00:01:43.932144   29728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:01:43.932508   29728 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:01:43.932763   29728 main.go:141] libmachine: (ha-504633-m04) Calling .DriverName
	I0314 00:01:43.932962   29728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:01:43.932980   29728 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHHostname
	I0314 00:01:43.936186   29728 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0314 00:01:43.936745   29728 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 01:00:30 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0314 00:01:43.936773   29728 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0314 00:01:43.936865   29728 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHPort
	I0314 00:01:43.937006   29728 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHKeyPath
	I0314 00:01:43.937119   29728 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHUsername
	I0314 00:01:43.937265   29728 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m04/id_rsa Username:docker}
	I0314 00:01:44.027398   29728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:01:44.045235   29728 status.go:257] ha-504633-m04 status: &{Name:ha-504633-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr" : exit status 2
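Editor's note: the status stderr above shows why both remaining control-plane nodes are reported as "apiserver: Stopped" even though a kube-apiserver process is found: the freezer-cgroup lookup fails, and every healthz probe against the HA VIP (192.168.39.254:8443) hits the 5-second client timeout before giving up. The snippet below is only a hedged approximation of such a probe, not minikube's api_server.go; the endpoint and timeout are taken from the log, and skipping TLS verification is an assumption made to keep the sketch self-contained.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The 5s timeout and VIP endpoint mirror the "Checking apiserver healthz" lines above.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		// e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
		fmt.Println("apiserver treated as Stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz HTTP status:", resp.StatusCode)
}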
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-504633 -n ha-504633
helpers_test.go:239: (dbg) Done: out/minikube-linux-amd64 status --format={{.Host}} -p ha-504633 -n ha-504633: (3.312372237s)
helpers_test.go:244: <<< TestMutliControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-504633 logs -n 25: (1.867089322s)
helpers_test.go:252: TestMutliControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m02 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m03_ha-504633-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04:/home/docker/cp-test_ha-504633-m03_ha-504633-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m04 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m03_ha-504633-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp testdata/cp-test.txt                                                | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile1259924449/001/cp-test_ha-504633-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633:/home/docker/cp-test_ha-504633-m04_ha-504633.txt                       |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633 sudo cat                                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633.txt                                 |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m02:/home/docker/cp-test_ha-504633-m04_ha-504633-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m02 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03:/home/docker/cp-test_ha-504633-m04_ha-504633-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m03 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-504633 node stop m02 -v=7                                                     | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-504633 node start m02 -v=7                                                    | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-504633 -v=7                                                           | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-504633 -v=7                                                                | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-504633 --wait=true -v=7                                                    | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:56 UTC | 14 Mar 24 00:00 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-504633                                                                | ha-504633 | jenkins | v1.32.0 | 14 Mar 24 00:00 UTC |                     |
	| node    | ha-504633 node delete m03 -v=7                                                   | ha-504633 | jenkins | v1.32.0 | 14 Mar 24 00:00 UTC | 14 Mar 24 00:01 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/13 23:56:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0313 23:56:30.098794   28409 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:56:30.098914   28409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:56:30.098923   28409 out.go:304] Setting ErrFile to fd 2...
	I0313 23:56:30.098928   28409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:56:30.099134   28409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:56:30.099654   28409 out.go:298] Setting JSON to false
	I0313 23:56:30.100577   28409 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2333,"bootTime":1710371857,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:56:30.100637   28409 start.go:139] virtualization: kvm guest
	I0313 23:56:30.103023   28409 out.go:177] * [ha-504633] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:56:30.104427   28409 out.go:177]   - MINIKUBE_LOCATION=18375
	I0313 23:56:30.105802   28409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:56:30.104443   28409 notify.go:220] Checking for updates...
	I0313 23:56:30.108628   28409 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:56:30.109948   28409 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:56:30.111538   28409 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0313 23:56:30.112884   28409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0313 23:56:30.114617   28409 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:56:30.114710   28409 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:56:30.115158   28409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:56:30.115192   28409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:56:30.130073   28409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44079
	I0313 23:56:30.130476   28409 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:56:30.131066   28409 main.go:141] libmachine: Using API Version  1
	I0313 23:56:30.131089   28409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:56:30.131384   28409 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:56:30.131578   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:56:30.165833   28409 out.go:177] * Using the kvm2 driver based on existing profile
	I0313 23:56:30.167022   28409 start.go:297] selected driver: kvm2
	I0313 23:56:30.167035   28409 start.go:901] validating driver "kvm2" against &{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.241 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:56:30.167185   28409 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0313 23:56:30.167473   28409 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:56:30.167555   28409 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0313 23:56:30.182038   28409 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0313 23:56:30.182685   28409 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0313 23:56:30.182714   28409 cni.go:84] Creating CNI manager for ""
	I0313 23:56:30.182718   28409 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0313 23:56:30.182803   28409 start.go:340] cluster config:
	{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.241 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:56:30.182925   28409 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:56:30.185191   28409 out.go:177] * Starting "ha-504633" primary control-plane node in "ha-504633" cluster
	I0313 23:56:30.186941   28409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:56:30.186991   28409 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0313 23:56:30.187001   28409 cache.go:56] Caching tarball of preloaded images
	I0313 23:56:30.187073   28409 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0313 23:56:30.187091   28409 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0313 23:56:30.187207   28409 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:56:30.187388   28409 start.go:360] acquireMachinesLock for ha-504633: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0313 23:56:30.187433   28409 start.go:364] duration metric: took 28.831µs to acquireMachinesLock for "ha-504633"
	I0313 23:56:30.187447   28409 start.go:96] Skipping create...Using existing machine configuration
	I0313 23:56:30.187454   28409 fix.go:54] fixHost starting: 
	I0313 23:56:30.187701   28409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:56:30.187742   28409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:56:30.201690   28409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36541
	I0313 23:56:30.202140   28409 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:56:30.202610   28409 main.go:141] libmachine: Using API Version  1
	I0313 23:56:30.202628   28409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:56:30.203018   28409 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:56:30.203197   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:56:30.203351   28409 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:56:30.204871   28409 fix.go:112] recreateIfNeeded on ha-504633: state=Running err=<nil>
	W0313 23:56:30.204890   28409 fix.go:138] unexpected machine state, will restart: <nil>
	I0313 23:56:30.206803   28409 out.go:177] * Updating the running kvm2 "ha-504633" VM ...
	I0313 23:56:30.207965   28409 machine.go:94] provisionDockerMachine start ...
	I0313 23:56:30.207984   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:56:30.208167   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.210512   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.210996   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.211031   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.211147   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.211321   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.211470   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.211605   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.211757   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:56:30.211986   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:56:30.212003   28409 main.go:141] libmachine: About to run SSH command:
	hostname
	I0313 23:56:30.328432   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-504633
	
	I0313 23:56:30.328460   28409 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:56:30.328737   28409 buildroot.go:166] provisioning hostname "ha-504633"
	I0313 23:56:30.328768   28409 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:56:30.328970   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.331435   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.331897   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.331929   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.332007   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.332203   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.332380   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.332532   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.332676   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:56:30.332881   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:56:30.332904   28409 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-504633 && echo "ha-504633" | sudo tee /etc/hostname
	I0313 23:56:30.464341   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-504633
	
	I0313 23:56:30.464368   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.467065   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.467483   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.467515   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.467715   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.467914   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.468065   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.468194   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.468333   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:56:30.468502   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:56:30.468524   28409 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-504633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-504633/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-504633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0313 23:56:30.584360   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:56:30.584396   28409 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0313 23:56:30.584420   28409 buildroot.go:174] setting up certificates
	I0313 23:56:30.584430   28409 provision.go:84] configureAuth start
	I0313 23:56:30.584438   28409 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:56:30.584756   28409 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:56:30.587336   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.587798   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.587826   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.587958   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.590133   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.590486   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.590511   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.590669   28409 provision.go:143] copyHostCerts
	I0313 23:56:30.590701   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:56:30.590755   28409 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0313 23:56:30.590781   28409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:56:30.590859   28409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0313 23:56:30.590971   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:56:30.590997   28409 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0313 23:56:30.591003   28409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:56:30.591041   28409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0313 23:56:30.591114   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:56:30.591140   28409 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0313 23:56:30.591146   28409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:56:30.591179   28409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0313 23:56:30.591247   28409 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.ha-504633 san=[127.0.0.1 192.168.39.31 ha-504633 localhost minikube]
	I0313 23:56:30.693441   28409 provision.go:177] copyRemoteCerts
	I0313 23:56:30.693505   28409 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0313 23:56:30.693564   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.696012   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.696413   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.696440   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.696627   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.696839   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.697011   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.697175   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:56:30.785650   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0313 23:56:30.785717   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0313 23:56:30.817216   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0313 23:56:30.817299   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0313 23:56:30.853125   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0313 23:56:30.853195   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0313 23:56:30.881900   28409 provision.go:87] duration metric: took 297.459041ms to configureAuth
	I0313 23:56:30.881929   28409 buildroot.go:189] setting minikube options for container-runtime
	I0313 23:56:30.882126   28409 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:56:30.882189   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.884828   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.885248   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.885279   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.885467   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.885658   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.885801   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.885941   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.886061   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:56:30.886259   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:56:30.886275   28409 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0313 23:58:01.808461   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0313 23:58:01.808491   28409 machine.go:97] duration metric: took 1m31.600511132s to provisionDockerMachine
	I0313 23:58:01.808508   28409 start.go:293] postStartSetup for "ha-504633" (driver="kvm2")
	I0313 23:58:01.808522   28409 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0313 23:58:01.808543   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:01.808861   28409 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0313 23:58:01.808887   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:01.812149   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:01.812576   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:01.812605   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:01.812815   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:01.813014   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:01.813193   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:01.813334   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:58:01.903105   28409 ssh_runner.go:195] Run: cat /etc/os-release
	I0313 23:58:01.907651   28409 info.go:137] Remote host: Buildroot 2023.02.9
	I0313 23:58:01.907680   28409 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0313 23:58:01.907783   28409 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0313 23:58:01.907865   28409 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0313 23:58:01.907876   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /etc/ssl/certs/122682.pem
	I0313 23:58:01.907960   28409 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0313 23:58:01.919465   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:58:01.946408   28409 start.go:296] duration metric: took 137.888217ms for postStartSetup
	I0313 23:58:01.946446   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:01.946781   28409 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0313 23:58:01.946811   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:01.949427   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:01.949914   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:01.949935   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:01.950107   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:01.950318   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:01.950518   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:01.950688   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	W0313 23:58:02.037663   28409 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0313 23:58:02.037686   28409 fix.go:56] duration metric: took 1m31.850231206s for fixHost
	I0313 23:58:02.037711   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:02.040343   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.040708   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:02.040738   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.040849   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:02.041044   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:02.041210   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:02.041348   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:02.041514   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:58:02.041672   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:58:02.041682   28409 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0313 23:58:02.155870   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710374282.116750971
	
	I0313 23:58:02.155898   28409 fix.go:216] guest clock: 1710374282.116750971
	I0313 23:58:02.155910   28409 fix.go:229] Guest: 2024-03-13 23:58:02.116750971 +0000 UTC Remote: 2024-03-13 23:58:02.037694094 +0000 UTC m=+91.985482062 (delta=79.056877ms)
	I0313 23:58:02.155974   28409 fix.go:200] guest clock delta is within tolerance: 79.056877ms
	I0313 23:58:02.155983   28409 start.go:83] releasing machines lock for "ha-504633", held for 1m31.968539762s
	I0313 23:58:02.156015   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:02.156280   28409 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:58:02.158806   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.159205   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:02.159237   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.159370   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:02.160006   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:02.160181   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:02.160247   28409 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0313 23:58:02.160291   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:02.160405   28409 ssh_runner.go:195] Run: cat /version.json
	I0313 23:58:02.160429   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:02.162810   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.163073   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.163115   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:02.163140   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.163246   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:02.163435   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:02.163505   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:02.163525   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.163591   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:02.163741   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:02.163819   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:58:02.163890   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:02.164013   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:02.164150   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:58:02.244724   28409 ssh_runner.go:195] Run: systemctl --version
	I0313 23:58:02.281104   28409 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0313 23:58:02.443505   28409 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0313 23:58:02.454543   28409 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0313 23:58:02.454609   28409 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0313 23:58:02.464849   28409 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0313 23:58:02.464876   28409 start.go:494] detecting cgroup driver to use...
	I0313 23:58:02.464929   28409 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0313 23:58:02.482057   28409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0313 23:58:02.496724   28409 docker.go:217] disabling cri-docker service (if available) ...
	I0313 23:58:02.496794   28409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0313 23:58:02.511697   28409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0313 23:58:02.527065   28409 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0313 23:58:02.681040   28409 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0313 23:58:02.835362   28409 docker.go:233] disabling docker service ...
	I0313 23:58:02.835438   28409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0313 23:58:02.854015   28409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0313 23:58:02.870563   28409 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0313 23:58:03.023394   28409 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0313 23:58:03.174638   28409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0313 23:58:03.190413   28409 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0313 23:58:03.211721   28409 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0313 23:58:03.211780   28409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:58:03.222878   28409 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0313 23:58:03.222942   28409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:58:03.233630   28409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:58:03.244322   28409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:58:03.255468   28409 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0313 23:58:03.267600   28409 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0313 23:58:03.277642   28409 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0313 23:58:03.287571   28409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:58:03.439510   28409 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0313 23:58:03.748831   28409 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0313 23:58:03.748906   28409 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0313 23:58:03.756137   28409 start.go:562] Will wait 60s for crictl version
	I0313 23:58:03.756204   28409 ssh_runner.go:195] Run: which crictl
	I0313 23:58:03.760744   28409 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0313 23:58:03.805526   28409 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0313 23:58:03.805610   28409 ssh_runner.go:195] Run: crio --version
	I0313 23:58:03.836970   28409 ssh_runner.go:195] Run: crio --version
	I0313 23:58:03.869816   28409 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0313 23:58:03.871315   28409 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:58:03.873980   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:03.874401   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:03.874426   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:03.874660   28409 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0313 23:58:03.879849   28409 kubeadm.go:877] updating cluster {Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.241 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0313 23:58:03.880030   28409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:58:03.880092   28409 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:58:03.930042   28409 crio.go:496] all images are preloaded for cri-o runtime.
	I0313 23:58:03.930078   28409 crio.go:415] Images already preloaded, skipping extraction
	I0313 23:58:03.930134   28409 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:58:03.969471   28409 crio.go:496] all images are preloaded for cri-o runtime.
	I0313 23:58:03.969495   28409 cache_images.go:84] Images are preloaded, skipping loading
	I0313 23:58:03.969505   28409 kubeadm.go:928] updating node { 192.168.39.31 8443 v1.28.4 crio true true} ...
	I0313 23:58:03.969619   28409 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-504633 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0313 23:58:03.969719   28409 ssh_runner.go:195] Run: crio config
	I0313 23:58:04.017739   28409 cni.go:84] Creating CNI manager for ""
	I0313 23:58:04.017763   28409 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0313 23:58:04.017775   28409 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0313 23:58:04.017804   28409 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.31 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-504633 NodeName:ha-504633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0313 23:58:04.017946   28409 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-504633"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0313 23:58:04.017969   28409 kube-vip.go:105] generating kube-vip config ...
	I0313 23:58:04.018032   28409 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0313 23:58:04.018085   28409 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0313 23:58:04.028452   28409 binaries.go:44] Found k8s binaries, skipping transfer
	I0313 23:58:04.028565   28409 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0313 23:58:04.038875   28409 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0313 23:58:04.057207   28409 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0313 23:58:04.076202   28409 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0313 23:58:04.094416   28409 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0313 23:58:04.112686   28409 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0313 23:58:04.118169   28409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:58:04.271186   28409 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:58:04.288068   28409 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633 for IP: 192.168.39.31
	I0313 23:58:04.288091   28409 certs.go:194] generating shared ca certs ...
	I0313 23:58:04.288105   28409 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:58:04.288255   28409 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0313 23:58:04.288306   28409 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0313 23:58:04.288320   28409 certs.go:256] generating profile certs ...
	I0313 23:58:04.288406   28409 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key
	I0313 23:58:04.288441   28409 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.877a2b10
	I0313 23:58:04.288463   28409 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.877a2b10 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.31 192.168.39.47 192.168.39.156 192.168.39.254]
	I0313 23:58:04.453092   28409 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.877a2b10 ...
	I0313 23:58:04.453124   28409 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.877a2b10: {Name:mk7f4dfb8ffb67726421360a0ca328ea06182ab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:58:04.453293   28409 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.877a2b10 ...
	I0313 23:58:04.453304   28409 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.877a2b10: {Name:mkbf58ff48cd95f35e326039dbd8db4c6d576092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:58:04.453372   28409 certs.go:381] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.877a2b10 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt
	I0313 23:58:04.453516   28409 certs.go:385] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.877a2b10 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key
	I0313 23:58:04.453663   28409 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key
	I0313 23:58:04.453679   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0313 23:58:04.453691   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0313 23:58:04.453702   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0313 23:58:04.453719   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0313 23:58:04.453730   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0313 23:58:04.453740   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0313 23:58:04.453749   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0313 23:58:04.453760   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0313 23:58:04.453819   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0313 23:58:04.453846   28409 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0313 23:58:04.453853   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0313 23:58:04.453871   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0313 23:58:04.453894   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0313 23:58:04.453914   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0313 23:58:04.453947   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:58:04.453974   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem -> /usr/share/ca-certificates/12268.pem
	I0313 23:58:04.453986   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /usr/share/ca-certificates/122682.pem
	I0313 23:58:04.453998   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:58:04.454536   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0313 23:58:04.484150   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0313 23:58:04.511621   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0313 23:58:04.537753   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0313 23:58:04.563144   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0313 23:58:04.589432   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0313 23:58:04.614262   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0313 23:58:04.640703   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0313 23:58:04.666472   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0313 23:58:04.694051   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0313 23:58:04.720108   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0313 23:58:04.745795   28409 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0313 23:58:04.764781   28409 ssh_runner.go:195] Run: openssl version
	I0313 23:58:04.771274   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0313 23:58:04.782595   28409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0313 23:58:04.787475   28409 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0313 23:58:04.787534   28409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0313 23:58:04.793412   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0313 23:58:04.803571   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0313 23:58:04.815663   28409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0313 23:58:04.820695   28409 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0313 23:58:04.820752   28409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0313 23:58:04.827074   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0313 23:58:04.837326   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0313 23:58:04.849315   28409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:58:04.854566   28409 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:58:04.854629   28409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:58:04.860986   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0313 23:58:04.871499   28409 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0313 23:58:04.877769   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0313 23:58:04.884080   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0313 23:58:04.890339   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0313 23:58:04.896291   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0313 23:58:04.902256   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0313 23:58:04.908159   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0313 23:58:04.914108   28409 kubeadm.go:391] StartCluster: {Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.241 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:58:04.914211   28409 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0313 23:58:04.914255   28409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0313 23:58:05.005477   28409 cri.go:89] found id: "156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	I0313 23:58:05.005504   28409 cri.go:89] found id: "997c2a0595975aac0fa1f4e2f4ed2b071768dbbe122a24a9ace7bcddac59a574"
	I0313 23:58:05.005508   28409 cri.go:89] found id: "705a44943e5ae9684327019d5cba671d9e6fc4baa380fc53f9177b6231975ffb"
	I0313 23:58:05.005511   28409 cri.go:89] found id: "d6dc521bb48cc0b39badfba80b2def42ad744f06beeca9bacdced9693d0c4531"
	I0313 23:58:05.005514   28409 cri.go:89] found id: "b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	I0313 23:58:05.005517   28409 cri.go:89] found id: "aadb470eed29b2f719f5fbb858bf9123995c5f9752f94e4c060b37334f36098a"
	I0313 23:58:05.005519   28409 cri.go:89] found id: "91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025"
	I0313 23:58:05.005521   28409 cri.go:89] found id: "cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d"
	I0313 23:58:05.005524   28409 cri.go:89] found id: "b87585aab2e4e09551759e68536fb211fa1b0caf3e52eaa8a9cc6c2a02018f9c"
	I0313 23:58:05.005528   28409 cri.go:89] found id: "ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a"
	I0313 23:58:05.005531   28409 cri.go:89] found id: "ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714"
	I0313 23:58:05.005535   28409 cri.go:89] found id: "03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33"
	I0313 23:58:05.005538   28409 cri.go:89] found id: "f760286dfea8a62f478148ae4d5d43792cffbe25bf3faa4e1dc3f19d288fa6c1"
	I0313 23:58:05.005540   28409 cri.go:89] found id: "581070edea465f8d145d27a60ffb393b98695bf0829f1e57ab098e13914064c9"
	I0313 23:58:05.005553   28409 cri.go:89] found id: ""
	I0313 23:58:05.005637   28409 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.746457347Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374507746427770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dc18e06-fdc7-4b8b-8d2e-22e4f08eaf74 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.749880736Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7954f08-7494-4eea-a56d-dfff49e1f8be name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.750018196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7954f08-7494-4eea-a56d-dfff49e1f8be name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.750659192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec,PodSandboxId:408abe06ec2bdfeb08109f0a100bdc461530fd555b24398ea142ee5bbfca4647,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:9,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710374503795398841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710374344805858713,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad93782f06ad008a87fef3300e81338c4438423e076a68d6c3ae4af2619b74f,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710374339792026087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710374334790569126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710374331792652786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b6800024430de256acac63a91aee0f3c4048c486b4b6053d6e59210fb898fe,PodSandboxId:1833af16e7cfaf95c1a0781cae51b9cb93c6afd490822c41b42c1317193f9dc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710374323069349310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791,PodSandboxId:88f517fd35061841044e028230a19ed5fe98e43f5febc68a588a05080328c658,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710374290395150151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:b964950d4816e397c8b058ba0d8f87de8d09cd44fda17dce515d39044c93f420,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710374289594055159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a733ab5
86b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710374289989765088,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3d6f776a6b20ee3c1b32374c40385cd3b826094d
e44efd90e86b2c4581cb25,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710374289880765157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed23280f4a5ee98a4f01e9712529e4f0da45e69d85ca4208e58b8c051827bb
,PodSandboxId:d374e5b744b402d34a336793d0b8b5232953c2e0e7acc19d595a5992ef77d217,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374289820663264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51,PodSandboxId:adde121b4482d5558f959b3bb88f58ebbf6ebb1c6271621ce69e6ca878f5584d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710374289759118794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba,PodSandboxId:64b3632a81b1a0792699205e294b67f1fd4957d0511f594937ead53999b1b69e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710374289723662664,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710374289629636370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85,PodSandboxId:2e1ee02dfee79aae3748308611205832d71c4ff9aef700d306cf26ac1dbd0e60,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374285184221956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710374186787396189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuber
netes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710373793336215727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534968581242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534992474919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710373529504369187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710373509707712229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710373509712425
832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7954f08-7494-4eea-a56d-dfff49e1f8be name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.797451461Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2293f89-a345-430f-bec4-ec341f42e497 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.797526470Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2293f89-a345-430f-bec4-ec341f42e497 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.799211362Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5b50d94-0d64-4aa9-8a5c-c6affd0aba60 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.799962322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374507799937204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5b50d94-0d64-4aa9-8a5c-c6affd0aba60 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.800535524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4d9efe2-56bf-4d5f-bbf0-b0659f35b3c9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.800587898Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4d9efe2-56bf-4d5f-bbf0-b0659f35b3c9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.801318018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec,PodSandboxId:408abe06ec2bdfeb08109f0a100bdc461530fd555b24398ea142ee5bbfca4647,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:9,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710374503795398841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710374344805858713,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad93782f06ad008a87fef3300e81338c4438423e076a68d6c3ae4af2619b74f,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710374339792026087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710374334790569126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710374331792652786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b6800024430de256acac63a91aee0f3c4048c486b4b6053d6e59210fb898fe,PodSandboxId:1833af16e7cfaf95c1a0781cae51b9cb93c6afd490822c41b42c1317193f9dc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710374323069349310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791,PodSandboxId:88f517fd35061841044e028230a19ed5fe98e43f5febc68a588a05080328c658,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710374290395150151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:b964950d4816e397c8b058ba0d8f87de8d09cd44fda17dce515d39044c93f420,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710374289594055159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a733ab5
86b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710374289989765088,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3d6f776a6b20ee3c1b32374c40385cd3b826094d
e44efd90e86b2c4581cb25,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710374289880765157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed23280f4a5ee98a4f01e9712529e4f0da45e69d85ca4208e58b8c051827bb
,PodSandboxId:d374e5b744b402d34a336793d0b8b5232953c2e0e7acc19d595a5992ef77d217,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374289820663264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51,PodSandboxId:adde121b4482d5558f959b3bb88f58ebbf6ebb1c6271621ce69e6ca878f5584d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710374289759118794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba,PodSandboxId:64b3632a81b1a0792699205e294b67f1fd4957d0511f594937ead53999b1b69e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710374289723662664,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710374289629636370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85,PodSandboxId:2e1ee02dfee79aae3748308611205832d71c4ff9aef700d306cf26ac1dbd0e60,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374285184221956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710374186787396189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuber
netes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710373793336215727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534968581242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534992474919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710373529504369187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710373509707712229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710373509712425
832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4d9efe2-56bf-4d5f-bbf0-b0659f35b3c9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.855349542Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70af749e-1354-4cdc-8c1a-eed2f7ae0b23 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.855420340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70af749e-1354-4cdc-8c1a-eed2f7ae0b23 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.856673315Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0eeb7b9b-1ecc-4d70-a4f4-311944068427 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.857409398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374507857383911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0eeb7b9b-1ecc-4d70-a4f4-311944068427 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.857933506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44946665-a5d8-40c1-b12c-87288d5c1b1e name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.858042140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44946665-a5d8-40c1-b12c-87288d5c1b1e name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.858623289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec,PodSandboxId:408abe06ec2bdfeb08109f0a100bdc461530fd555b24398ea142ee5bbfca4647,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:9,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710374503795398841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710374344805858713,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad93782f06ad008a87fef3300e81338c4438423e076a68d6c3ae4af2619b74f,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710374339792026087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710374334790569126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710374331792652786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b6800024430de256acac63a91aee0f3c4048c486b4b6053d6e59210fb898fe,PodSandboxId:1833af16e7cfaf95c1a0781cae51b9cb93c6afd490822c41b42c1317193f9dc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710374323069349310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791,PodSandboxId:88f517fd35061841044e028230a19ed5fe98e43f5febc68a588a05080328c658,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710374290395150151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:b964950d4816e397c8b058ba0d8f87de8d09cd44fda17dce515d39044c93f420,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710374289594055159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a733ab5
86b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710374289989765088,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3d6f776a6b20ee3c1b32374c40385cd3b826094d
e44efd90e86b2c4581cb25,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710374289880765157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed23280f4a5ee98a4f01e9712529e4f0da45e69d85ca4208e58b8c051827bb
,PodSandboxId:d374e5b744b402d34a336793d0b8b5232953c2e0e7acc19d595a5992ef77d217,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374289820663264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51,PodSandboxId:adde121b4482d5558f959b3bb88f58ebbf6ebb1c6271621ce69e6ca878f5584d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710374289759118794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba,PodSandboxId:64b3632a81b1a0792699205e294b67f1fd4957d0511f594937ead53999b1b69e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710374289723662664,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710374289629636370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85,PodSandboxId:2e1ee02dfee79aae3748308611205832d71c4ff9aef700d306cf26ac1dbd0e60,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374285184221956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710374186787396189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuber
netes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710373793336215727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534968581242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534992474919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710373529504369187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710373509707712229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710373509712425
832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44946665-a5d8-40c1-b12c-87288d5c1b1e name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.906887368Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96be3306-1e5c-4b7d-b022-9d2930fecaa7 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.907028629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96be3306-1e5c-4b7d-b022-9d2930fecaa7 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.908297352Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb176553-c069-450d-82be-34214576a0e1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.909122786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374507909088842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb176553-c069-450d-82be-34214576a0e1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.910579928Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc80d94c-5b92-4b3a-aee0-e962e7c3bdf8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.910663740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc80d94c-5b92-4b3a-aee0-e962e7c3bdf8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:01:47 ha-504633 crio[4294]: time="2024-03-14 00:01:47.911317093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec,PodSandboxId:408abe06ec2bdfeb08109f0a100bdc461530fd555b24398ea142ee5bbfca4647,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:9,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710374503795398841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710374344805858713,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad93782f06ad008a87fef3300e81338c4438423e076a68d6c3ae4af2619b74f,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710374339792026087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710374334790569126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710374331792652786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b6800024430de256acac63a91aee0f3c4048c486b4b6053d6e59210fb898fe,PodSandboxId:1833af16e7cfaf95c1a0781cae51b9cb93c6afd490822c41b42c1317193f9dc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710374323069349310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791,PodSandboxId:88f517fd35061841044e028230a19ed5fe98e43f5febc68a588a05080328c658,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710374290395150151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:b964950d4816e397c8b058ba0d8f87de8d09cd44fda17dce515d39044c93f420,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710374289594055159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a733ab5
86b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710374289989765088,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3d6f776a6b20ee3c1b32374c40385cd3b826094d
e44efd90e86b2c4581cb25,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710374289880765157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed23280f4a5ee98a4f01e9712529e4f0da45e69d85ca4208e58b8c051827bb
,PodSandboxId:d374e5b744b402d34a336793d0b8b5232953c2e0e7acc19d595a5992ef77d217,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374289820663264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51,PodSandboxId:adde121b4482d5558f959b3bb88f58ebbf6ebb1c6271621ce69e6ca878f5584d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710374289759118794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba,PodSandboxId:64b3632a81b1a0792699205e294b67f1fd4957d0511f594937ead53999b1b69e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710374289723662664,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710374289629636370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85,PodSandboxId:2e1ee02dfee79aae3748308611205832d71c4ff9aef700d306cf26ac1dbd0e60,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374285184221956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b,PodSandboxId:6664331d2d846b27a8a6f51f5fdecfe4fa209d1c2238dea33cbbaab8cc532f02,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710374186787396189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuber
netes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710373793336215727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534968581242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534992474919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710373529504369187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710373509707712229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710373509712425
832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc80d94c-5b92-4b3a-aee0-e962e7c3bdf8 name=/runtime.v1.RuntimeService/ListContainers
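The crio debug entries above are traces of /runtime.v1.RuntimeService/ListContainers calls served over unix:///var/run/crio/crio.sock (the cri-socket recorded in the node annotations further down). For reference only, and not part of the captured output, here is a minimal Go sketch of issuing the same CRI call directly; it assumes the standard google.golang.org/grpc and k8s.io/cri-api modules are available on the machine running it.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O socket; grpc-go resolves the unix:// scheme natively.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// An empty filter corresponds to the "No filters were applied, returning
	// full container list" entries in the log above.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

crictl ps -a against the same socket performs essentially this call, which is how the "container status" listing below is produced.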
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ad22c90519039       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      4 seconds ago       Running             kube-vip                  9                   408abe06ec2bd       kube-vip-ha-504633
	32eccfe2db8bf       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago       Running             kindnet-cni               3                   0ae4db3977043       kindnet-8kvnb
	2ad93782f06ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       4                   24d8f48eddc11       storage-provisioner
	6c12af0f98a84       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago       Running             kube-controller-manager   2                   7114523c0a886       kube-controller-manager-ha-504633
	0bb4395e019a7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago       Running             kube-apiserver            3                   3a02e247a65fa       kube-apiserver-ha-504633
	d5b6800024430       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   1833af16e7cfa       busybox-5b5d89c9d6-dx92g
	365fcf57ea467       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   88f517fd35061       kube-proxy-j56zl
	a733ab586b563       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Exited              kindnet-cni               2                   0ae4db3977043       kindnet-8kvnb
	be3d6f776a6b2       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Exited              kube-apiserver            2                   3a02e247a65fa       kube-apiserver-ha-504633
	a6ed23280f4a5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   d374e5b744b40       coredns-5dd5756b68-dbkfv
	28e15c659f106       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   adde121b4482d       etcd-ha-504633
	597de64e318a0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   64b3632a81b1a       kube-scheduler-ha-504633
	e53161751ea00       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Exited              kube-controller-manager   1                   7114523c0a886       kube-controller-manager-ha-504633
	b964950d4816e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       3                   24d8f48eddc11       storage-provisioner
	a32ba91e1ce55       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   2e1ee02dfee79       coredns-5dd5756b68-hh2kw
	156780ad31a1b       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Exited              kube-vip                  8                   6664331d2d846       kube-vip-ha-504633
	3e670be31d057       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago      Exited              busybox                   0                   44694d6d0ddb1       busybox-5b5d89c9d6-dx92g
	91c5fdb6071ed       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago      Exited              coredns                   0                   ac06f7523df34       coredns-5dd5756b68-dbkfv
	cea68e46e7574       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago      Exited              coredns                   0                   99eec3703a3ac       coredns-5dd5756b68-hh2kw
	ce0dc1e514cfe       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      16 minutes ago      Exited              kube-proxy                0                   508491d3a970a       kube-proxy-j56zl
	ec04eb9f36ad1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      16 minutes ago      Exited              etcd                      0                   2e892e8826932       etcd-ha-504633
	03595624eed74       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      16 minutes ago      Exited              kube-scheduler            0                   e5651d5d4cdf1       kube-scheduler-ha-504633
	
	
	==> coredns [91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025] <==
	[INFO] 10.244.2.2:45263 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00157594s
	[INFO] 10.244.2.2:56184 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095082s
	[INFO] 10.244.2.2:38062 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145314s
	[INFO] 10.244.2.2:47535 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099682s
	[INFO] 10.244.1.2:38146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248518s
	[INFO] 10.244.1.2:54521 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00160289s
	[INFO] 10.244.1.2:34985 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001396473s
	[INFO] 10.244.1.2:37504 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127175s
	[INFO] 10.244.1.2:47786 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089644s
	[INFO] 10.244.0.4:42865 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167315s
	[INFO] 10.244.2.2:37374 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167385s
	[INFO] 10.244.2.2:33251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009522s
	[INFO] 10.244.1.2:39140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158704s
	[INFO] 10.244.1.2:36398 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143215s
	[INFO] 10.244.1.2:60528 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012073s
	[INFO] 10.244.1.2:45057 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013653s
	[INFO] 10.244.0.4:55605 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153423s
	[INFO] 10.244.1.2:37595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218212s
	[INFO] 10.244.1.2:45054 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155156s
	[INFO] 10.244.1.2:45734 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159775s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46251 - 3158 "HINFO IN 4020314174239755005.6788368900148723181. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006083937s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a6ed23280f4a5ee98a4f01e9712529e4f0da45e69d85ca4208e58b8c051827bb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39174 - 27518 "HINFO IN 5234567318487603077.6782029109910001331. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009507212s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53538->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d] <==
	[INFO] 10.244.0.4:36734 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153111s
	[INFO] 10.244.0.4:36918 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002576888s
	[INFO] 10.244.2.2:52506 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000216481s
	[INFO] 10.244.2.2:41181 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142291s
	[INFO] 10.244.1.2:41560 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185807s
	[INFO] 10.244.1.2:34843 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104567s
	[INFO] 10.244.1.2:36490 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000226318s
	[INFO] 10.244.0.4:60091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107953s
	[INFO] 10.244.0.4:37327 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151724s
	[INFO] 10.244.0.4:35399 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043972s
	[INFO] 10.244.2.2:59809 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090745s
	[INFO] 10.244.2.2:40239 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069623s
	[INFO] 10.244.0.4:36867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127937s
	[INFO] 10.244.0.4:35854 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000195121s
	[INFO] 10.244.0.4:56742 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109765s
	[INFO] 10.244.2.2:33696 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132875s
	[INFO] 10.244.2.2:51474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149174s
	[INFO] 10.244.2.2:58642 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010185s
	[INFO] 10.244.2.2:58203 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089769s
	[INFO] 10.244.1.2:54587 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000118471s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-504633
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_13T23_45_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:45:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:01:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Mar 2024 23:59:00 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Mar 2024 23:59:00 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Mar 2024 23:59:00 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Mar 2024 23:59:00 +0000   Wed, 13 Mar 2024 23:45:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.31
	  Hostname:    ha-504633
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 13fd8f4b90794ddf8d3d6bdb9051c529
	  System UUID:                13fd8f4b-9079-4ddf-8d3d-6bdb9051c529
	  Boot ID:                    83daf814-565c-4717-8930-43f7c53558eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-dx92g             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-5dd5756b68-dbkfv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-5dd5756b68-hh2kw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-504633                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-8kvnb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-504633             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-504633    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-j56zl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-504633             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-504633                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m55s              kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node ha-504633 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node ha-504633 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node ha-504633 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m                kubelet          Node ha-504633 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                kubelet          Node ha-504633 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                kubelet          Node ha-504633 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal   NodeReady                16m                kubelet          Node ha-504633 status is now: NodeReady
	  Normal   RegisteredNode           13m                node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Warning  ContainerGCFailed        4m32s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m48s              node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal   RegisteredNode           2m42s              node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	
	
	Name:               ha-504633-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_13T23_47_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:47:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:01:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Mar 2024 23:59:34 +0000   Wed, 13 Mar 2024 23:59:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Mar 2024 23:59:34 +0000   Wed, 13 Mar 2024 23:59:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Mar 2024 23:59:34 +0000   Wed, 13 Mar 2024 23:59:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Mar 2024 23:59:34 +0000   Wed, 13 Mar 2024 23:59:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    ha-504633-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f6ba1a02ba14580ac16771f2b426854
	  System UUID:                5f6ba1a0-2ba1-4580-ac16-771f2b426854
	  Boot ID:                    213c5b73-5c4f-4560-89e6-87c5c4535369
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-zfjjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-504633-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-f4pz8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-504633-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-504633-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-4s9t5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-504633-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-504633-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 2m27s                  kube-proxy       
	  Normal  RegisteredNode           14m                    node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  NodeNotReady             10m                    node-controller  Node ha-504633-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  3m22s (x8 over 3m22s)  kubelet          Node ha-504633-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    3m22s (x8 over 3m22s)  kubelet          Node ha-504633-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m22s (x7 over 3m22s)  kubelet          Node ha-504633-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode           2m42s                  node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode           96s                    node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	
	
	Name:               ha-504633-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_13T23_50_35_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:50:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:01:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 00:01:06 +0000   Thu, 14 Mar 2024 00:00:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 00:01:06 +0000   Thu, 14 Mar 2024 00:00:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 00:01:06 +0000   Thu, 14 Mar 2024 00:00:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 00:01:06 +0000   Thu, 14 Mar 2024 00:00:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-504633-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d985b67edcea4528bf49bb9fe5eeb65e
	  System UUID:                d985b67e-dcea-4528-bf49-bb9fe5eeb65e
	  Boot ID:                    9a8ee068-23d6-49e5-a453-ae122df76fb3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-tcqdr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kindnet-dn6gl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-7hr7b            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 69s                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x5 over 11m)  kubelet          Node ha-504633-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x5 over 11m)  kubelet          Node ha-504633-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x5 over 11m)  kubelet          Node ha-504633-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   NodeReady                11m                kubelet          Node ha-504633-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m48s              node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   RegisteredNode           2m42s              node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   NodeNotReady             2m8s               node-controller  Node ha-504633-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           96s                node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   Starting                 73s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  73s (x3 over 73s)  kubelet          Node ha-504633-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x3 over 73s)  kubelet          Node ha-504633-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x3 over 73s)  kubelet          Node ha-504633-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 73s (x2 over 73s)  kubelet          Node ha-504633-m04 has been rebooted, boot id: 9a8ee068-23d6-49e5-a453-ae122df76fb3
	  Normal   NodeReady                73s (x2 over 73s)  kubelet          Node ha-504633-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +9.715783] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.056763] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061483] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.171003] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.142829] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.235386] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Mar13 23:45] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.057845] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.706645] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.862236] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.155181] kauditd_printk_skb: 51 callbacks suppressed
	[  +2.379152] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[ +12.986535] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.322868] kauditd_printk_skb: 43 callbacks suppressed
	[Mar13 23:46] kauditd_printk_skb: 27 callbacks suppressed
	[Mar13 23:58] systemd-fstab-generator[4216]: Ignoring "noauto" option for root device
	[  +0.153832] systemd-fstab-generator[4228]: Ignoring "noauto" option for root device
	[  +0.187781] systemd-fstab-generator[4242]: Ignoring "noauto" option for root device
	[  +0.149225] systemd-fstab-generator[4254]: Ignoring "noauto" option for root device
	[  +0.268609] systemd-fstab-generator[4278]: Ignoring "noauto" option for root device
	[  +0.830313] systemd-fstab-generator[4380]: Ignoring "noauto" option for root device
	[  +5.053578] kauditd_printk_skb: 132 callbacks suppressed
	[  +7.758209] kauditd_printk_skb: 80 callbacks suppressed
	[ +34.892345] kauditd_printk_skb: 1 callbacks suppressed
	[  +7.328226] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51] <==
	{"level":"info","ts":"2024-03-13T23:59:53.825578Z","caller":"traceutil/trace.go:171","msg":"trace[269743063] transaction","detail":"{read_only:false; response_revision:2063; number_of_response:1; }","duration":"165.628751ms","start":"2024-03-13T23:59:53.659924Z","end":"2024-03-13T23:59:53.825552Z","steps":["trace[269743063] 'process raft request'  (duration: 165.485531ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-13T23:59:54.1971Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c25b0656f1ce3d71","to":"8a6ebbe0b7bc25b1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-13T23:59:54.197206Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:59:54.197253Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:59:54.214156Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:59:54.218035Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c25b0656f1ce3d71","to":"8a6ebbe0b7bc25b1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-13T23:59:54.218084Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:59:54.220119Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"warn","ts":"2024-03-14T00:00:49.779255Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.156:49200","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-03-14T00:00:49.826375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c25b0656f1ce3d71 switched to configuration voters=(14004794436732468593 17975950259062721749)"}
	{"level":"info","ts":"2024-03-14T00:00:49.826732Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"62f0e98a58b5dbcf","local-member-id":"c25b0656f1ce3d71","removed-remote-peer-id":"8a6ebbe0b7bc25b1","removed-remote-peer-urls":["https://192.168.39.156:2380"]}
	{"level":"info","ts":"2024-03-14T00:00:49.826883Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"warn","ts":"2024-03-14T00:00:49.82719Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-14T00:00:49.827246Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"warn","ts":"2024-03-14T00:00:49.828215Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-14T00:00:49.828302Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-14T00:00:49.828787Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"warn","ts":"2024-03-14T00:00:49.82927Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1","error":"context canceled"}
	{"level":"warn","ts":"2024-03-14T00:00:49.829587Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"8a6ebbe0b7bc25b1","error":"failed to read 8a6ebbe0b7bc25b1 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-03-14T00:00:49.829634Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"warn","ts":"2024-03-14T00:00:49.829868Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1","error":"context canceled"}
	{"level":"info","ts":"2024-03-14T00:00:49.82995Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-14T00:00:49.83027Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-14T00:00:49.83038Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"c25b0656f1ce3d71","removed-remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"warn","ts":"2024-03-14T00:00:49.851219Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"c25b0656f1ce3d71","remote-peer-id-stream-handler":"c25b0656f1ce3d71","remote-peer-id-from":"8a6ebbe0b7bc25b1"}
	
	
	==> etcd [ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714] <==
	{"level":"info","ts":"2024-03-13T23:56:31.030267Z","caller":"traceutil/trace.go:171","msg":"trace[82422090] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; }","duration":"146.582127ms","start":"2024-03-13T23:56:30.883677Z","end":"2024-03-13T23:56:31.03026Z","steps":["trace[82422090] 'agreement among raft nodes before linearized reading'  (duration: 133.285904ms)"],"step_count":1}
	WARNING: 2024/03/13 23:56:31 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-13T23:56:31.030362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.824463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-03-13T23:56:31.030376Z","caller":"traceutil/trace.go:171","msg":"trace[30097541] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; }","duration":"117.845034ms","start":"2024-03-13T23:56:30.912526Z","end":"2024-03-13T23:56:31.030371Z","steps":["trace[30097541] 'agreement among raft nodes before linearized reading'  (duration: 117.823726ms)"],"step_count":1}
	WARNING: 2024/03/13 23:56:31 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-13T23:56:31.046603Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.31:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-13T23:56:31.046656Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.31:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-13T23:56:31.046727Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"c25b0656f1ce3d71","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-13T23:56:31.046878Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.04692Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047134Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047383Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047473Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047537Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047579Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047588Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047598Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.04764Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047719Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047766Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047821Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047855Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.051496Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.31:2380"}
	{"level":"info","ts":"2024-03-13T23:56:31.051611Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.31:2380"}
	{"level":"info","ts":"2024-03-13T23:56:31.051643Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-504633","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.31:2380"],"advertise-client-urls":["https://192.168.39.31:2379"]}
	
	
	==> kernel <==
	 00:01:48 up 17 min,  0 users,  load average: 0.21, 0.43, 0.35
	Linux ha-504633 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48] <==
	I0314 00:01:06.039108       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0314 00:01:16.056431       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0314 00:01:16.056477       1 main.go:227] handling current node
	I0314 00:01:16.056489       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0314 00:01:16.056495       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0314 00:01:16.056617       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0314 00:01:16.056646       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0314 00:01:26.071714       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0314 00:01:26.071770       1 main.go:227] handling current node
	I0314 00:01:26.071780       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0314 00:01:26.071787       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0314 00:01:26.071901       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0314 00:01:26.071906       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0314 00:01:36.088294       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0314 00:01:36.088344       1 main.go:227] handling current node
	I0314 00:01:36.088365       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0314 00:01:36.088375       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0314 00:01:36.088489       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0314 00:01:36.088514       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0314 00:01:46.098958       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0314 00:01:46.099088       1 main.go:227] handling current node
	I0314 00:01:46.099103       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0314 00:01:46.099111       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0314 00:01:46.099258       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0314 00:01:46.099297       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [a733ab586b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f] <==
	I0313 23:58:10.556238       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0313 23:58:10.558062       1 main.go:107] hostIP = 192.168.39.31
	podIP = 192.168.39.31
	I0313 23:58:10.558267       1 main.go:116] setting mtu 1500 for CNI 
	I0313 23:58:10.561039       1 main.go:146] kindnetd IP family: "ipv4"
	I0313 23:58:10.561117       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0313 23:58:12.048699       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0313 23:58:15.120735       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0313 23:58:26.122664       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0313 23:58:30.480524       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0313 23:58:33.552553       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0] <==
	I0313 23:58:54.080722       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0313 23:58:54.080852       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0313 23:58:54.092228       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0313 23:58:54.092295       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0313 23:58:54.092382       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0313 23:58:54.092558       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0313 23:58:54.154312       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0313 23:58:54.165829       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0313 23:58:54.171516       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0313 23:58:54.171529       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0313 23:58:54.172185       1 shared_informer.go:318] Caches are synced for configmaps
	I0313 23:58:54.175389       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0313 23:58:54.175565       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0313 23:58:54.179153       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	W0313 23:58:54.188452       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.156 192.168.39.47]
	I0313 23:58:54.190384       1 controller.go:624] quota admission added evaluator for: endpoints
	I0313 23:58:54.192454       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0313 23:58:54.192514       1 aggregator.go:166] initial CRD sync complete...
	I0313 23:58:54.192536       1 autoregister_controller.go:141] Starting autoregister controller
	I0313 23:58:54.192544       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0313 23:58:54.192552       1 cache.go:39] Caches are synced for autoregister controller
	I0313 23:58:54.200231       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0313 23:58:54.207778       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0313 23:58:55.095788       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0313 23:58:55.628745       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.156 192.168.39.31 192.168.39.47]
	
	
	==> kube-apiserver [be3d6f776a6b20ee3c1b32374c40385cd3b826094de44efd90e86b2c4581cb25] <==
	I0313 23:58:10.587833       1 options.go:220] external host was not specified, using 192.168.39.31
	I0313 23:58:10.589141       1 server.go:148] Version: v1.28.4
	I0313 23:58:10.589190       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:58:10.960512       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0313 23:58:10.972325       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0313 23:58:10.972408       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0313 23:58:10.972706       1 instance.go:298] Using reconciler: lease
	W0313 23:58:30.947173       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0313 23:58:30.952347       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0313 23:58:30.973636       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a] <==
	I0314 00:00:35.951443       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-504633-m04"
	I0314 00:00:46.450850       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-tcqdr"
	I0314 00:00:46.477237       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="50.692633ms"
	I0314 00:00:46.562624       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-n2l2v"
	I0314 00:00:46.581679       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="104.368933ms"
	I0314 00:00:46.633368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="51.52126ms"
	I0314 00:00:46.634223       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-sqk5k"
	I0314 00:00:46.676457       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.969905ms"
	I0314 00:00:46.676803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="266.804µs"
	I0314 00:00:50.285091       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.359559ms"
	I0314 00:00:50.285344       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="71.947µs"
	I0314 00:01:11.423295       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-504633-m04"
	I0314 00:01:11.587285       1 event.go:307] "Event occurred" object="ha-504633-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node ha-504633-m03 event: Removing Node ha-504633-m03 from Controller"
	E0314 00:01:26.523396       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:26.523522       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:26.523554       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:26.523578       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:26.523669       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:26.523698       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:46.524759       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:46.524809       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:46.524818       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:46.524824       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:46.524830       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:46.524836       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	
	
	==> kube-controller-manager [e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5] <==
	I0313 23:58:11.449358       1 serving.go:348] Generated self-signed cert in-memory
	I0313 23:58:11.925415       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0313 23:58:11.925496       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:58:11.927114       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0313 23:58:11.927301       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0313 23:58:11.928174       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0313 23:58:11.928326       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0313 23:58:31.981043       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.31:8443/healthz\": dial tcp 192.168.39.31:8443: connect: connection refused"
	
	
	==> kube-proxy [365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791] <==
	I0313 23:58:11.279414       1 server_others.go:69] "Using iptables proxy"
	E0313 23:58:12.945927       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:16.017290       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:19.090373       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:25.232466       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:34.449562       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:52.880584       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	I0313 23:58:52.883107       1 server.go:969] "Can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag"
	I0313 23:58:52.945876       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0313 23:58:52.945937       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0313 23:58:52.950525       1 server_others.go:152] "Using iptables Proxier"
	I0313 23:58:52.951121       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0313 23:58:52.952171       1 server.go:846] "Version info" version="v1.28.4"
	I0313 23:58:52.952216       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:58:52.957526       1 config.go:188] "Starting service config controller"
	I0313 23:58:52.957594       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0313 23:58:52.957651       1 config.go:97] "Starting endpoint slice config controller"
	I0313 23:58:52.957659       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0313 23:58:52.958592       1 config.go:315] "Starting node config controller"
	I0313 23:58:52.958637       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0313 23:58:54.958592       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0313 23:58:54.958700       1 shared_informer.go:318] Caches are synced for node config
	I0313 23:58:54.958712       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a] <==
	I0313 23:45:29.711578       1 server_others.go:69] "Using iptables proxy"
	I0313 23:45:29.730452       1 node.go:141] Successfully retrieved node IP: 192.168.39.31
	I0313 23:45:29.778135       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0313 23:45:29.778173       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0313 23:45:29.781710       1 server_others.go:152] "Using iptables Proxier"
	I0313 23:45:29.782511       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0313 23:45:29.782796       1 server.go:846] "Version info" version="v1.28.4"
	I0313 23:45:29.782835       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:45:29.784428       1 config.go:188] "Starting service config controller"
	I0313 23:45:29.785222       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0313 23:45:29.785343       1 config.go:315] "Starting node config controller"
	I0313 23:45:29.785372       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0313 23:45:29.785796       1 config.go:97] "Starting endpoint slice config controller"
	I0313 23:45:29.785829       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0313 23:45:29.885734       1 shared_informer.go:318] Caches are synced for node config
	I0313 23:45:29.885761       1 shared_informer.go:318] Caches are synced for service config
	I0313 23:45:29.886938       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33] <==
	W0313 23:56:27.311602       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0313 23:56:27.311659       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0313 23:56:27.533068       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0313 23:56:27.533165       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0313 23:56:27.604660       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0313 23:56:27.604838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0313 23:56:27.754398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0313 23:56:27.754565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0313 23:56:27.772480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0313 23:56:27.772646       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0313 23:56:27.791220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0313 23:56:27.791266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0313 23:56:27.793169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0313 23:56:27.793208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0313 23:56:28.027210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0313 23:56:28.027234       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0313 23:56:28.092424       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0313 23:56:28.092523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0313 23:56:28.939272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0313 23:56:28.939349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0313 23:56:29.169752       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0313 23:56:29.169860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0313 23:56:31.002106       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0313 23:56:31.002261       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0313 23:56:31.002463       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba] <==
	W0313 23:58:47.947622       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.31:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:47.947699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.31:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:48.337094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.31:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:48.337168       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.31:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:48.509652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.31:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:48.509716       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.31:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:48.904352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.39.31:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:48.904411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.31:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:49.047777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.31:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:49.047939       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.31:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:49.433391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.31:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:49.433515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.31:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:50.731428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.31:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:50.731551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.31:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:51.059885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.31:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:51.060023       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.31:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:51.726045       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.31:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:51.726148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.31:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:51.991331       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.31:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:51.991397       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.31:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	I0313 23:59:07.387507       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0314 00:00:46.490419       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-tcqdr\": pod busybox-5b5d89c9d6-tcqdr is already assigned to node \"ha-504633-m04\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-tcqdr" node="ha-504633-m04"
	E0314 00:00:46.493621       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod eb2cd887-fa57-4342-b3ff-90cc3acd8c6e(default/busybox-5b5d89c9d6-tcqdr) wasn't assumed so cannot be forgotten"
	E0314 00:00:46.493889       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-tcqdr\": pod busybox-5b5d89c9d6-tcqdr is already assigned to node \"ha-504633-m04\"" pod="default/busybox-5b5d89c9d6-tcqdr"
	I0314 00:00:46.493961       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-tcqdr" node="ha-504633-m04"
	
	
	==> kubelet <==
	Mar 14 00:00:16 ha-504633 kubelet[1439]: E0314 00:00:16.830917    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 00:00:16 ha-504633 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 00:00:16 ha-504633 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 00:00:16 ha-504633 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 00:00:16 ha-504633 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 00:00:19 ha-504633 kubelet[1439]: I0314 00:00:19.776622    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	Mar 14 00:00:19 ha-504633 kubelet[1439]: E0314 00:00:19.777499    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:00:30 ha-504633 kubelet[1439]: I0314 00:00:30.777532    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	Mar 14 00:00:30 ha-504633 kubelet[1439]: E0314 00:00:30.778147    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:00:42 ha-504633 kubelet[1439]: I0314 00:00:42.776943    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	Mar 14 00:00:42 ha-504633 kubelet[1439]: E0314 00:00:42.778456    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:00:54 ha-504633 kubelet[1439]: I0314 00:00:54.777130    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	Mar 14 00:00:54 ha-504633 kubelet[1439]: E0314 00:00:54.777613    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:01:05 ha-504633 kubelet[1439]: I0314 00:01:05.777394    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	Mar 14 00:01:05 ha-504633 kubelet[1439]: E0314 00:01:05.778419    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:01:16 ha-504633 kubelet[1439]: E0314 00:01:16.828220    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 00:01:16 ha-504633 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 00:01:16 ha-504633 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 00:01:16 ha-504633 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 00:01:16 ha-504633 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 00:01:17 ha-504633 kubelet[1439]: I0314 00:01:17.776645    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	Mar 14 00:01:17 ha-504633 kubelet[1439]: E0314 00:01:17.777165    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:01:29 ha-504633 kubelet[1439]: I0314 00:01:29.777323    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	Mar 14 00:01:29 ha-504633 kubelet[1439]: E0314 00:01:29.778070    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:01:43 ha-504633 kubelet[1439]: I0314 00:01:43.777094    1439 scope.go:117] "RemoveContainer" containerID="156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:01:47.443306   29904 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18375-4912/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-504633 -n ha-504633
helpers_test.go:261: (dbg) Run:  kubectl --context ha-504633 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/DeleteSecondaryNode (63.89s)

                                                
                                    
TestMutliControlPlane/serial/StopCluster (142.08s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 stop -v=7 --alsologtostderr
E0314 00:03:36.336091   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-504633 stop -v=7 --alsologtostderr: exit status 82 (2m0.501737936s)

                                                
                                                
-- stdout --
	* Stopping node "ha-504633-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 00:01:50.061969   30013 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:01:50.062119   30013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:01:50.062131   30013 out.go:304] Setting ErrFile to fd 2...
	I0314 00:01:50.062138   30013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:01:50.062348   30013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:01:50.062595   30013 out.go:298] Setting JSON to false
	I0314 00:01:50.062676   30013 mustload.go:65] Loading cluster: ha-504633
	I0314 00:01:50.063010   30013 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:01:50.063098   30013 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0314 00:01:50.063270   30013 mustload.go:65] Loading cluster: ha-504633
	I0314 00:01:50.063395   30013 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:01:50.063416   30013 stop.go:39] StopHost: ha-504633-m04
	I0314 00:01:50.063766   30013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:01:50.063809   30013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:01:50.079132   30013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0314 00:01:50.079586   30013 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:01:50.080165   30013 main.go:141] libmachine: Using API Version  1
	I0314 00:01:50.080188   30013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:01:50.080499   30013 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:01:50.082981   30013 out.go:177] * Stopping node "ha-504633-m04"  ...
	I0314 00:01:50.084330   30013 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0314 00:01:50.084356   30013 main.go:141] libmachine: (ha-504633-m04) Calling .DriverName
	I0314 00:01:50.084609   30013 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0314 00:01:50.084639   30013 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHHostname
	I0314 00:01:50.087465   30013 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0314 00:01:50.087877   30013 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 01:00:30 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0314 00:01:50.087913   30013 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0314 00:01:50.087979   30013 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHPort
	I0314 00:01:50.088167   30013 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHKeyPath
	I0314 00:01:50.088319   30013 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHUsername
	I0314 00:01:50.088459   30013 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m04/id_rsa Username:docker}
	I0314 00:01:50.173662   30013 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0314 00:01:50.227358   30013 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0314 00:01:50.281359   30013 main.go:141] libmachine: Stopping "ha-504633-m04"...
	I0314 00:01:50.281389   30013 main.go:141] libmachine: (ha-504633-m04) Calling .GetState
	I0314 00:01:50.283078   30013 main.go:141] libmachine: (ha-504633-m04) Calling .Stop
	I0314 00:01:50.286695   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 0/120
	I0314 00:01:51.288251   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 1/120
	I0314 00:01:52.290373   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 2/120
	I0314 00:01:53.291750   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 3/120
	I0314 00:01:54.293318   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 4/120
	I0314 00:01:55.295241   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 5/120
	I0314 00:01:56.296425   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 6/120
	I0314 00:01:57.298294   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 7/120
	I0314 00:01:58.299915   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 8/120
	I0314 00:01:59.301186   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 9/120
	I0314 00:02:00.303497   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 10/120
	I0314 00:02:01.304886   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 11/120
	I0314 00:02:02.306306   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 12/120
	I0314 00:02:03.307711   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 13/120
	I0314 00:02:04.309331   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 14/120
	I0314 00:02:05.311199   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 15/120
	I0314 00:02:06.313118   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 16/120
	I0314 00:02:07.314409   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 17/120
	I0314 00:02:08.316014   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 18/120
	I0314 00:02:09.318057   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 19/120
	I0314 00:02:10.320293   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 20/120
	I0314 00:02:11.322101   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 21/120
	I0314 00:02:12.323606   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 22/120
	I0314 00:02:13.325276   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 23/120
	I0314 00:02:14.327459   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 24/120
	I0314 00:02:15.329640   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 25/120
	I0314 00:02:16.331350   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 26/120
	I0314 00:02:17.332750   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 27/120
	I0314 00:02:18.334412   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 28/120
	I0314 00:02:19.336118   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 29/120
	I0314 00:02:20.338823   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 30/120
	I0314 00:02:21.340845   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 31/120
	I0314 00:02:22.342200   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 32/120
	I0314 00:02:23.343760   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 33/120
	I0314 00:02:24.345478   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 34/120
	I0314 00:02:25.347506   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 35/120
	I0314 00:02:26.349423   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 36/120
	I0314 00:02:27.351023   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 37/120
	I0314 00:02:28.353259   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 38/120
	I0314 00:02:29.354574   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 39/120
	I0314 00:02:30.357075   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 40/120
	I0314 00:02:31.358492   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 41/120
	I0314 00:02:32.360687   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 42/120
	I0314 00:02:33.362359   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 43/120
	I0314 00:02:34.364643   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 44/120
	I0314 00:02:35.366887   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 45/120
	I0314 00:02:36.368353   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 46/120
	I0314 00:02:37.369795   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 47/120
	I0314 00:02:38.371282   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 48/120
	I0314 00:02:39.372841   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 49/120
	I0314 00:02:40.375142   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 50/120
	I0314 00:02:41.377567   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 51/120
	I0314 00:02:42.379099   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 52/120
	I0314 00:02:43.381349   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 53/120
	I0314 00:02:44.382683   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 54/120
	I0314 00:02:45.384152   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 55/120
	I0314 00:02:46.385845   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 56/120
	I0314 00:02:47.387230   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 57/120
	I0314 00:02:48.389390   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 58/120
	I0314 00:02:49.390801   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 59/120
	I0314 00:02:50.393317   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 60/120
	I0314 00:02:51.395054   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 61/120
	I0314 00:02:52.397320   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 62/120
	I0314 00:02:53.399132   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 63/120
	I0314 00:02:54.400541   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 64/120
	I0314 00:02:55.402555   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 65/120
	I0314 00:02:56.404485   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 66/120
	I0314 00:02:57.405873   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 67/120
	I0314 00:02:58.407160   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 68/120
	I0314 00:02:59.408684   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 69/120
	I0314 00:03:00.411131   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 70/120
	I0314 00:03:01.413167   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 71/120
	I0314 00:03:02.414746   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 72/120
	I0314 00:03:03.416032   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 73/120
	I0314 00:03:04.417491   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 74/120
	I0314 00:03:05.419456   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 75/120
	I0314 00:03:06.420973   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 76/120
	I0314 00:03:07.422272   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 77/120
	I0314 00:03:08.423563   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 78/120
	I0314 00:03:09.424900   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 79/120
	I0314 00:03:10.427236   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 80/120
	I0314 00:03:11.429549   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 81/120
	I0314 00:03:12.431117   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 82/120
	I0314 00:03:13.433115   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 83/120
	I0314 00:03:14.434720   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 84/120
	I0314 00:03:15.436273   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 85/120
	I0314 00:03:16.437781   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 86/120
	I0314 00:03:17.439587   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 87/120
	I0314 00:03:18.441274   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 88/120
	I0314 00:03:19.442670   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 89/120
	I0314 00:03:20.445033   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 90/120
	I0314 00:03:21.446647   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 91/120
	I0314 00:03:22.448946   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 92/120
	I0314 00:03:23.450485   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 93/120
	I0314 00:03:24.452421   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 94/120
	I0314 00:03:25.454487   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 95/120
	I0314 00:03:26.455997   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 96/120
	I0314 00:03:27.458723   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 97/120
	I0314 00:03:28.460225   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 98/120
	I0314 00:03:29.461721   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 99/120
	I0314 00:03:30.464111   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 100/120
	I0314 00:03:31.466633   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 101/120
	I0314 00:03:32.467817   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 102/120
	I0314 00:03:33.469209   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 103/120
	I0314 00:03:34.470809   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 104/120
	I0314 00:03:35.472074   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 105/120
	I0314 00:03:36.473417   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 106/120
	I0314 00:03:37.475066   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 107/120
	I0314 00:03:38.477349   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 108/120
	I0314 00:03:39.478748   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 109/120
	I0314 00:03:40.480858   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 110/120
	I0314 00:03:41.482354   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 111/120
	I0314 00:03:42.483798   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 112/120
	I0314 00:03:43.486223   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 113/120
	I0314 00:03:44.488078   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 114/120
	I0314 00:03:45.489933   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 115/120
	I0314 00:03:46.491539   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 116/120
	I0314 00:03:47.493423   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 117/120
	I0314 00:03:48.494837   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 118/120
	I0314 00:03:49.496358   30013 main.go:141] libmachine: (ha-504633-m04) Waiting for machine to stop 119/120
	I0314 00:03:50.497495   30013 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0314 00:03:50.497571   30013 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0314 00:03:50.499666   30013 out.go:177] 
	W0314 00:03:50.501316   30013 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0314 00:03:50.501334   30013 out.go:239] * 
	* 
	W0314 00:03:50.504014   30013 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 00:03:50.506272   30013 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-504633 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr: exit status 3 (19.115319273s)

                                                
                                                
-- stdout --
	ha-504633
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-504633-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 00:03:50.568797   30307 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:03:50.568928   30307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:03:50.568941   30307 out.go:304] Setting ErrFile to fd 2...
	I0314 00:03:50.568948   30307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:03:50.569460   30307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:03:50.569732   30307 out.go:298] Setting JSON to false
	I0314 00:03:50.569767   30307 mustload.go:65] Loading cluster: ha-504633
	I0314 00:03:50.569867   30307 notify.go:220] Checking for updates...
	I0314 00:03:50.570205   30307 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:03:50.570225   30307 status.go:255] checking status of ha-504633 ...
	I0314 00:03:50.570626   30307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:03:50.570679   30307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:03:50.588841   30307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34981
	I0314 00:03:50.589260   30307 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:03:50.589831   30307 main.go:141] libmachine: Using API Version  1
	I0314 00:03:50.589855   30307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:03:50.590244   30307 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:03:50.590417   30307 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0314 00:03:50.592374   30307 status.go:330] ha-504633 host status = "Running" (err=<nil>)
	I0314 00:03:50.592390   30307 host.go:66] Checking if "ha-504633" exists ...
	I0314 00:03:50.592669   30307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:03:50.592728   30307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:03:50.608827   30307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36009
	I0314 00:03:50.609216   30307 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:03:50.609715   30307 main.go:141] libmachine: Using API Version  1
	I0314 00:03:50.609737   30307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:03:50.610049   30307 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:03:50.610285   30307 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0314 00:03:50.613189   30307 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0314 00:03:50.613665   30307 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0314 00:03:50.613692   30307 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0314 00:03:50.613850   30307 host.go:66] Checking if "ha-504633" exists ...
	I0314 00:03:50.614237   30307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:03:50.614283   30307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:03:50.628943   30307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I0314 00:03:50.629328   30307 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:03:50.629851   30307 main.go:141] libmachine: Using API Version  1
	I0314 00:03:50.629866   30307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:03:50.630236   30307 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:03:50.630458   30307 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0314 00:03:50.630694   30307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:03:50.630733   30307 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0314 00:03:50.633896   30307 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0314 00:03:50.634314   30307 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0314 00:03:50.634336   30307 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0314 00:03:50.634482   30307 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0314 00:03:50.634660   30307 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0314 00:03:50.634836   30307 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0314 00:03:50.634999   30307 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0314 00:03:50.724503   30307 ssh_runner.go:195] Run: systemctl --version
	I0314 00:03:50.732268   30307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:03:50.751547   30307 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0314 00:03:50.751582   30307 api_server.go:166] Checking apiserver status ...
	I0314 00:03:50.751630   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:03:50.769502   30307 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5487/cgroup
	W0314 00:03:50.780266   30307 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5487/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:03:50.780314   30307 ssh_runner.go:195] Run: ls
	I0314 00:03:50.785413   30307 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 00:03:50.790155   30307 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 00:03:50.790174   30307 status.go:422] ha-504633 apiserver status = Running (err=<nil>)
	I0314 00:03:50.790183   30307 status.go:257] ha-504633 status: &{Name:ha-504633 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:03:50.790198   30307 status.go:255] checking status of ha-504633-m02 ...
	I0314 00:03:50.790532   30307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:03:50.790578   30307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:03:50.805677   30307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39009
	I0314 00:03:50.806142   30307 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:03:50.806674   30307 main.go:141] libmachine: Using API Version  1
	I0314 00:03:50.806704   30307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:03:50.807021   30307 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:03:50.807340   30307 main.go:141] libmachine: (ha-504633-m02) Calling .GetState
	I0314 00:03:50.809116   30307 status.go:330] ha-504633-m02 host status = "Running" (err=<nil>)
	I0314 00:03:50.809131   30307 host.go:66] Checking if "ha-504633-m02" exists ...
	I0314 00:03:50.809397   30307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:03:50.809434   30307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:03:50.823840   30307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40821
	I0314 00:03:50.824261   30307 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:03:50.824783   30307 main.go:141] libmachine: Using API Version  1
	I0314 00:03:50.824811   30307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:03:50.825132   30307 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:03:50.825379   30307 main.go:141] libmachine: (ha-504633-m02) Calling .GetIP
	I0314 00:03:50.828595   30307 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0314 00:03:50.828980   30307 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:58:16 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0314 00:03:50.829003   30307 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0314 00:03:50.829213   30307 host.go:66] Checking if "ha-504633-m02" exists ...
	I0314 00:03:50.829622   30307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:03:50.829682   30307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:03:50.844610   30307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37647
	I0314 00:03:50.845022   30307 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:03:50.845484   30307 main.go:141] libmachine: Using API Version  1
	I0314 00:03:50.845504   30307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:03:50.845838   30307 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:03:50.846046   30307 main.go:141] libmachine: (ha-504633-m02) Calling .DriverName
	I0314 00:03:50.846283   30307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:03:50.846305   30307 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHHostname
	I0314 00:03:50.849086   30307 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0314 00:03:50.849594   30307 main.go:141] libmachine: (ha-504633-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:27:e8", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:58:16 +0000 UTC Type:0 Mac:52:54:00:56:27:e8 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-504633-m02 Clientid:01:52:54:00:56:27:e8}
	I0314 00:03:50.849641   30307 main.go:141] libmachine: (ha-504633-m02) DBG | domain ha-504633-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:56:27:e8 in network mk-ha-504633
	I0314 00:03:50.849732   30307 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHPort
	I0314 00:03:50.849901   30307 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHKeyPath
	I0314 00:03:50.850034   30307 main.go:141] libmachine: (ha-504633-m02) Calling .GetSSHUsername
	I0314 00:03:50.850137   30307 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m02/id_rsa Username:docker}
	I0314 00:03:50.932186   30307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:03:50.952447   30307 kubeconfig.go:125] found "ha-504633" server: "https://192.168.39.254:8443"
	I0314 00:03:50.952474   30307 api_server.go:166] Checking apiserver status ...
	I0314 00:03:50.952531   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:03:50.970044   30307 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup
	W0314 00:03:50.982601   30307 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:03:50.982651   30307 ssh_runner.go:195] Run: ls
	I0314 00:03:50.987526   30307 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 00:03:50.994492   30307 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 00:03:50.994522   30307 status.go:422] ha-504633-m02 apiserver status = Running (err=<nil>)
	I0314 00:03:50.994533   30307 status.go:257] ha-504633-m02 status: &{Name:ha-504633-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:03:50.994558   30307 status.go:255] checking status of ha-504633-m04 ...
	I0314 00:03:50.994932   30307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:03:50.994989   30307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:03:51.010619   30307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0314 00:03:51.011150   30307 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:03:51.011608   30307 main.go:141] libmachine: Using API Version  1
	I0314 00:03:51.011666   30307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:03:51.012079   30307 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:03:51.012257   30307 main.go:141] libmachine: (ha-504633-m04) Calling .GetState
	I0314 00:03:51.013949   30307 status.go:330] ha-504633-m04 host status = "Running" (err=<nil>)
	I0314 00:03:51.013968   30307 host.go:66] Checking if "ha-504633-m04" exists ...
	I0314 00:03:51.014266   30307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:03:51.014309   30307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:03:51.029446   30307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I0314 00:03:51.029821   30307 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:03:51.030354   30307 main.go:141] libmachine: Using API Version  1
	I0314 00:03:51.030396   30307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:03:51.030731   30307 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:03:51.030935   30307 main.go:141] libmachine: (ha-504633-m04) Calling .GetIP
	I0314 00:03:51.034084   30307 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0314 00:03:51.034545   30307 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 01:00:30 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0314 00:03:51.034574   30307 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0314 00:03:51.034744   30307 host.go:66] Checking if "ha-504633-m04" exists ...
	I0314 00:03:51.035064   30307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:03:51.035109   30307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:03:51.049838   30307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39729
	I0314 00:03:51.050357   30307 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:03:51.050878   30307 main.go:141] libmachine: Using API Version  1
	I0314 00:03:51.050902   30307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:03:51.051257   30307 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:03:51.051516   30307 main.go:141] libmachine: (ha-504633-m04) Calling .DriverName
	I0314 00:03:51.051733   30307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:03:51.051753   30307 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHHostname
	I0314 00:03:51.054574   30307 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0314 00:03:51.055060   30307 main.go:141] libmachine: (ha-504633-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e5:9e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 01:00:30 +0000 UTC Type:0 Mac:52:54:00:14:e5:9e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-504633-m04 Clientid:01:52:54:00:14:e5:9e}
	I0314 00:03:51.055083   30307 main.go:141] libmachine: (ha-504633-m04) DBG | domain ha-504633-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:14:e5:9e in network mk-ha-504633
	I0314 00:03:51.055241   30307 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHPort
	I0314 00:03:51.055410   30307 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHKeyPath
	I0314 00:03:51.055530   30307 main.go:141] libmachine: (ha-504633-m04) Calling .GetSSHUsername
	I0314 00:03:51.055692   30307 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633-m04/id_rsa Username:docker}
	W0314 00:04:09.623063   30307 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.241:22: connect: no route to host
	W0314 00:04:09.623138   30307 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.241:22: connect: no route to host
	E0314 00:04:09.623152   30307 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.241:22: connect: no route to host
	I0314 00:04:09.623159   30307 status.go:257] ha-504633-m04 status: &{Name:ha-504633-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0314 00:04:09.623173   30307 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.241:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr" : exit status 3
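The exit status 3 above traces back to the SSH reachability probe in the captured stderr: with the ha-504633-m04 VM stopped, TCP dials to 192.168.39.241:22 fail with "connect: no route to host", and minikube maps that to Host:Error / status error. The following is a minimal, hypothetical Go sketch (not minikube's actual status code) of how such a probe surfaces that error; the address and timeout are illustrative values taken from the log above.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH attempts a plain TCP connection to a node's SSH port.
// A stopped or unreachable VM typically fails here with
// "connect: no route to host", which a caller can then report
// as a Host:Error style status instead of Running.
func probeSSH(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("ssh port unreachable: %w", err)
	}
	defer conn.Close()
	return nil
}

func main() {
	// Illustrative address from the log above; adjust for your own cluster.
	if err := probeSSH("192.168.39.241:22", 5*time.Second); err != nil {
		fmt.Println("node status: Error -", err)
		return
	}
	fmt.Println("node status: Running")
}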
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-504633 -n ha-504633
helpers_test.go:244: <<< TestMutliControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-504633 logs -n 25: (1.77948762s)
helpers_test.go:252: TestMutliControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-504633 ssh -n ha-504633-m02 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m03_ha-504633-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04:/home/docker/cp-test_ha-504633-m03_ha-504633-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m04 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m03_ha-504633-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp testdata/cp-test.txt                                                | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile1259924449/001/cp-test_ha-504633-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633:/home/docker/cp-test_ha-504633-m04_ha-504633.txt                       |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633 sudo cat                                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633.txt                                 |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m02:/home/docker/cp-test_ha-504633-m04_ha-504633-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m02 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt                              | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m03:/home/docker/cp-test_ha-504633-m04_ha-504633-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n                                                                 | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | ha-504633-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-504633 ssh -n ha-504633-m03 sudo cat                                          | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC | 13 Mar 24 23:51 UTC |
	|         | /home/docker/cp-test_ha-504633-m04_ha-504633-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-504633 node stop m02 -v=7                                                     | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-504633 node start m02 -v=7                                                    | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-504633 -v=7                                                           | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-504633 -v=7                                                                | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-504633 --wait=true -v=7                                                    | ha-504633 | jenkins | v1.32.0 | 13 Mar 24 23:56 UTC | 14 Mar 24 00:00 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-504633                                                                | ha-504633 | jenkins | v1.32.0 | 14 Mar 24 00:00 UTC |                     |
	| node    | ha-504633 node delete m03 -v=7                                                   | ha-504633 | jenkins | v1.32.0 | 14 Mar 24 00:00 UTC | 14 Mar 24 00:01 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-504633 stop -v=7                                                              | ha-504633 | jenkins | v1.32.0 | 14 Mar 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/13 23:56:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0313 23:56:30.098794   28409 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:56:30.098914   28409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:56:30.098923   28409 out.go:304] Setting ErrFile to fd 2...
	I0313 23:56:30.098928   28409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:56:30.099134   28409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:56:30.099654   28409 out.go:298] Setting JSON to false
	I0313 23:56:30.100577   28409 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2333,"bootTime":1710371857,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:56:30.100637   28409 start.go:139] virtualization: kvm guest
	I0313 23:56:30.103023   28409 out.go:177] * [ha-504633] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:56:30.104427   28409 out.go:177]   - MINIKUBE_LOCATION=18375
	I0313 23:56:30.105802   28409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:56:30.104443   28409 notify.go:220] Checking for updates...
	I0313 23:56:30.108628   28409 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:56:30.109948   28409 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:56:30.111538   28409 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0313 23:56:30.112884   28409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0313 23:56:30.114617   28409 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:56:30.114710   28409 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:56:30.115158   28409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:56:30.115192   28409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:56:30.130073   28409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44079
	I0313 23:56:30.130476   28409 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:56:30.131066   28409 main.go:141] libmachine: Using API Version  1
	I0313 23:56:30.131089   28409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:56:30.131384   28409 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:56:30.131578   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:56:30.165833   28409 out.go:177] * Using the kvm2 driver based on existing profile
	I0313 23:56:30.167022   28409 start.go:297] selected driver: kvm2
	I0313 23:56:30.167035   28409 start.go:901] validating driver "kvm2" against &{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.241 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:56:30.167185   28409 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0313 23:56:30.167473   28409 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:56:30.167555   28409 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0313 23:56:30.182038   28409 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0313 23:56:30.182685   28409 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0313 23:56:30.182714   28409 cni.go:84] Creating CNI manager for ""
	I0313 23:56:30.182718   28409 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0313 23:56:30.182803   28409 start.go:340] cluster config:
	{Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.241 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:56:30.182925   28409 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:56:30.185191   28409 out.go:177] * Starting "ha-504633" primary control-plane node in "ha-504633" cluster
	I0313 23:56:30.186941   28409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:56:30.186991   28409 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0313 23:56:30.187001   28409 cache.go:56] Caching tarball of preloaded images
	I0313 23:56:30.187073   28409 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0313 23:56:30.187091   28409 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0313 23:56:30.187207   28409 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/config.json ...
	I0313 23:56:30.187388   28409 start.go:360] acquireMachinesLock for ha-504633: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0313 23:56:30.187433   28409 start.go:364] duration metric: took 28.831µs to acquireMachinesLock for "ha-504633"
	I0313 23:56:30.187447   28409 start.go:96] Skipping create...Using existing machine configuration
	I0313 23:56:30.187454   28409 fix.go:54] fixHost starting: 
	I0313 23:56:30.187701   28409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:56:30.187742   28409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:56:30.201690   28409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36541
	I0313 23:56:30.202140   28409 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:56:30.202610   28409 main.go:141] libmachine: Using API Version  1
	I0313 23:56:30.202628   28409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:56:30.203018   28409 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:56:30.203197   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:56:30.203351   28409 main.go:141] libmachine: (ha-504633) Calling .GetState
	I0313 23:56:30.204871   28409 fix.go:112] recreateIfNeeded on ha-504633: state=Running err=<nil>
	W0313 23:56:30.204890   28409 fix.go:138] unexpected machine state, will restart: <nil>
	I0313 23:56:30.206803   28409 out.go:177] * Updating the running kvm2 "ha-504633" VM ...
	I0313 23:56:30.207965   28409 machine.go:94] provisionDockerMachine start ...
	I0313 23:56:30.207984   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:56:30.208167   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.210512   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.210996   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.211031   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.211147   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.211321   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.211470   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.211605   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.211757   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:56:30.211986   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:56:30.212003   28409 main.go:141] libmachine: About to run SSH command:
	hostname
	I0313 23:56:30.328432   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-504633
	
	I0313 23:56:30.328460   28409 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:56:30.328737   28409 buildroot.go:166] provisioning hostname "ha-504633"
	I0313 23:56:30.328768   28409 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:56:30.328970   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.331435   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.331897   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.331929   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.332007   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.332203   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.332380   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.332532   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.332676   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:56:30.332881   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:56:30.332904   28409 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-504633 && echo "ha-504633" | sudo tee /etc/hostname
	I0313 23:56:30.464341   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-504633
	
	I0313 23:56:30.464368   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.467065   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.467483   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.467515   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.467715   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.467914   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.468065   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.468194   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.468333   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:56:30.468502   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:56:30.468524   28409 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-504633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-504633/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-504633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0313 23:56:30.584360   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:56:30.584396   28409 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0313 23:56:30.584420   28409 buildroot.go:174] setting up certificates
	I0313 23:56:30.584430   28409 provision.go:84] configureAuth start
	I0313 23:56:30.584438   28409 main.go:141] libmachine: (ha-504633) Calling .GetMachineName
	I0313 23:56:30.584756   28409 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:56:30.587336   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.587798   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.587826   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.587958   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.590133   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.590486   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.590511   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.590669   28409 provision.go:143] copyHostCerts
	I0313 23:56:30.590701   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:56:30.590755   28409 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0313 23:56:30.590781   28409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0313 23:56:30.590859   28409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0313 23:56:30.590971   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:56:30.590997   28409 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0313 23:56:30.591003   28409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0313 23:56:30.591041   28409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0313 23:56:30.591114   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:56:30.591140   28409 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0313 23:56:30.591146   28409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0313 23:56:30.591179   28409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0313 23:56:30.591247   28409 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.ha-504633 san=[127.0.0.1 192.168.39.31 ha-504633 localhost minikube]
	I0313 23:56:30.693441   28409 provision.go:177] copyRemoteCerts
	I0313 23:56:30.693505   28409 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0313 23:56:30.693564   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.696012   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.696413   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.696440   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.696627   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.696839   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.697011   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.697175   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:56:30.785650   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0313 23:56:30.785717   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0313 23:56:30.817216   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0313 23:56:30.817299   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0313 23:56:30.853125   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0313 23:56:30.853195   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0313 23:56:30.881900   28409 provision.go:87] duration metric: took 297.459041ms to configureAuth
	I0313 23:56:30.881929   28409 buildroot.go:189] setting minikube options for container-runtime
	I0313 23:56:30.882126   28409 config.go:182] Loaded profile config "ha-504633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:56:30.882189   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:56:30.884828   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.885248   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:56:30.885279   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:56:30.885467   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:56:30.885658   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.885801   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:56:30.885941   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:56:30.886061   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:56:30.886259   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:56:30.886275   28409 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0313 23:58:01.808461   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0313 23:58:01.808491   28409 machine.go:97] duration metric: took 1m31.600511132s to provisionDockerMachine
	I0313 23:58:01.808508   28409 start.go:293] postStartSetup for "ha-504633" (driver="kvm2")
	I0313 23:58:01.808522   28409 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0313 23:58:01.808543   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:01.808861   28409 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0313 23:58:01.808887   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:01.812149   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:01.812576   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:01.812605   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:01.812815   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:01.813014   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:01.813193   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:01.813334   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:58:01.903105   28409 ssh_runner.go:195] Run: cat /etc/os-release
	I0313 23:58:01.907651   28409 info.go:137] Remote host: Buildroot 2023.02.9
	I0313 23:58:01.907680   28409 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0313 23:58:01.907783   28409 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0313 23:58:01.907865   28409 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0313 23:58:01.907876   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /etc/ssl/certs/122682.pem
	I0313 23:58:01.907960   28409 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0313 23:58:01.919465   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:58:01.946408   28409 start.go:296] duration metric: took 137.888217ms for postStartSetup
	I0313 23:58:01.946446   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:01.946781   28409 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0313 23:58:01.946811   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:01.949427   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:01.949914   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:01.949935   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:01.950107   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:01.950318   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:01.950518   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:01.950688   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	W0313 23:58:02.037663   28409 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0313 23:58:02.037686   28409 fix.go:56] duration metric: took 1m31.850231206s for fixHost
	I0313 23:58:02.037711   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:02.040343   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.040708   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:02.040738   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.040849   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:02.041044   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:02.041210   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:02.041348   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:02.041514   28409 main.go:141] libmachine: Using SSH client type: native
	I0313 23:58:02.041672   28409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0313 23:58:02.041682   28409 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0313 23:58:02.155870   28409 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710374282.116750971
	
	I0313 23:58:02.155898   28409 fix.go:216] guest clock: 1710374282.116750971
	I0313 23:58:02.155910   28409 fix.go:229] Guest: 2024-03-13 23:58:02.116750971 +0000 UTC Remote: 2024-03-13 23:58:02.037694094 +0000 UTC m=+91.985482062 (delta=79.056877ms)
	I0313 23:58:02.155974   28409 fix.go:200] guest clock delta is within tolerance: 79.056877ms
	I0313 23:58:02.155983   28409 start.go:83] releasing machines lock for "ha-504633", held for 1m31.968539762s
	I0313 23:58:02.156015   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:02.156280   28409 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:58:02.158806   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.159205   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:02.159237   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.159370   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:02.160006   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:02.160181   28409 main.go:141] libmachine: (ha-504633) Calling .DriverName
	I0313 23:58:02.160247   28409 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0313 23:58:02.160291   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:02.160405   28409 ssh_runner.go:195] Run: cat /version.json
	I0313 23:58:02.160429   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHHostname
	I0313 23:58:02.162810   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.163073   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.163115   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:02.163140   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.163246   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:02.163435   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:02.163505   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:02.163525   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:02.163591   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:02.163741   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHPort
	I0313 23:58:02.163819   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:58:02.163890   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHKeyPath
	I0313 23:58:02.164013   28409 main.go:141] libmachine: (ha-504633) Calling .GetSSHUsername
	I0313 23:58:02.164150   28409 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/ha-504633/id_rsa Username:docker}
	I0313 23:58:02.244724   28409 ssh_runner.go:195] Run: systemctl --version
	I0313 23:58:02.281104   28409 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0313 23:58:02.443505   28409 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0313 23:58:02.454543   28409 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0313 23:58:02.454609   28409 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0313 23:58:02.464849   28409 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0313 23:58:02.464876   28409 start.go:494] detecting cgroup driver to use...
	I0313 23:58:02.464929   28409 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0313 23:58:02.482057   28409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0313 23:58:02.496724   28409 docker.go:217] disabling cri-docker service (if available) ...
	I0313 23:58:02.496794   28409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0313 23:58:02.511697   28409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0313 23:58:02.527065   28409 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0313 23:58:02.681040   28409 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0313 23:58:02.835362   28409 docker.go:233] disabling docker service ...
	I0313 23:58:02.835438   28409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0313 23:58:02.854015   28409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0313 23:58:02.870563   28409 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0313 23:58:03.023394   28409 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0313 23:58:03.174638   28409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0313 23:58:03.190413   28409 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0313 23:58:03.211721   28409 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0313 23:58:03.211780   28409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:58:03.222878   28409 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0313 23:58:03.222942   28409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:58:03.233630   28409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:58:03.244322   28409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0313 23:58:03.255468   28409 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0313 23:58:03.267600   28409 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0313 23:58:03.277642   28409 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0313 23:58:03.287571   28409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:58:03.439510   28409 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0313 23:58:03.748831   28409 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0313 23:58:03.748906   28409 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0313 23:58:03.756137   28409 start.go:562] Will wait 60s for crictl version
	I0313 23:58:03.756204   28409 ssh_runner.go:195] Run: which crictl
	I0313 23:58:03.760744   28409 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0313 23:58:03.805526   28409 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0313 23:58:03.805610   28409 ssh_runner.go:195] Run: crio --version
	I0313 23:58:03.836970   28409 ssh_runner.go:195] Run: crio --version
	I0313 23:58:03.869816   28409 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0313 23:58:03.871315   28409 main.go:141] libmachine: (ha-504633) Calling .GetIP
	I0313 23:58:03.873980   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:03.874401   28409 main.go:141] libmachine: (ha-504633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:1c:0e", ip: ""} in network mk-ha-504633: {Iface:virbr1 ExpiryTime:2024-03-14 00:44:47 +0000 UTC Type:0 Mac:52:54:00:ad:1c:0e Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-504633 Clientid:01:52:54:00:ad:1c:0e}
	I0313 23:58:03.874426   28409 main.go:141] libmachine: (ha-504633) DBG | domain ha-504633 has defined IP address 192.168.39.31 and MAC address 52:54:00:ad:1c:0e in network mk-ha-504633
	I0313 23:58:03.874660   28409 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0313 23:58:03.879849   28409 kubeadm.go:877] updating cluster {Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.241 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0313 23:58:03.880030   28409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:58:03.880092   28409 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:58:03.930042   28409 crio.go:496] all images are preloaded for cri-o runtime.
	I0313 23:58:03.930078   28409 crio.go:415] Images already preloaded, skipping extraction
	I0313 23:58:03.930134   28409 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:58:03.969471   28409 crio.go:496] all images are preloaded for cri-o runtime.
	I0313 23:58:03.969495   28409 cache_images.go:84] Images are preloaded, skipping loading
	I0313 23:58:03.969505   28409 kubeadm.go:928] updating node { 192.168.39.31 8443 v1.28.4 crio true true} ...
	I0313 23:58:03.969619   28409 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-504633 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0313 23:58:03.969719   28409 ssh_runner.go:195] Run: crio config
	I0313 23:58:04.017739   28409 cni.go:84] Creating CNI manager for ""
	I0313 23:58:04.017763   28409 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0313 23:58:04.017775   28409 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0313 23:58:04.017804   28409 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.31 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-504633 NodeName:ha-504633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0313 23:58:04.017946   28409 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-504633"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
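The generated KubeletConfiguration above pins cgroupDriver to cgroupfs and points the runtime endpoint at the CRI-O socket, matching the drop-in edited earlier. A minimal sketch (assuming the third-party gopkg.in/yaml.v3 package, which is not part of this test run) that parses the fragment and sanity-checks those two fields:

    // Minimal sketch, assuming gopkg.in/yaml.v3: parse the KubeletConfiguration
    // fragment above and confirm the cgroup driver matches CRI-O's "cgroupfs".
    package main

    import (
        "fmt"
        "log"

        "gopkg.in/yaml.v3"
    )

    const kubeletCfg = `
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    `

    func main() {
        var cfg struct {
            CgroupDriver             string `yaml:"cgroupDriver"`
            ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
        }
        if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
            log.Fatal(err)
        }
        if cfg.CgroupDriver != "cgroupfs" {
            log.Fatalf("unexpected cgroup driver: %q", cfg.CgroupDriver)
        }
        fmt.Println("kubelet will talk to", cfg.ContainerRuntimeEndpoint)
    }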
	
	I0313 23:58:04.017969   28409 kube-vip.go:105] generating kube-vip config ...
	I0313 23:58:04.018032   28409 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
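The kube-vip static pod above advertises the shared control-plane VIP 192.168.39.254 on port 8443 and load-balances it across the control-plane nodes. A minimal reachability sketch (a plain TCP dial only, not an API-server health check):

    // Minimal sketch: check that the kube-vip-managed VIP from the manifest
    // above (192.168.39.254:8443) accepts TCP connections. This only tests
    // reachability, not API server health.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("VIP answering on", conn.RemoteAddr())
    }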
	I0313 23:58:04.018085   28409 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0313 23:58:04.028452   28409 binaries.go:44] Found k8s binaries, skipping transfer
	I0313 23:58:04.028565   28409 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0313 23:58:04.038875   28409 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0313 23:58:04.057207   28409 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0313 23:58:04.076202   28409 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0313 23:58:04.094416   28409 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0313 23:58:04.112686   28409 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0313 23:58:04.118169   28409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:58:04.271186   28409 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:58:04.288068   28409 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633 for IP: 192.168.39.31
	I0313 23:58:04.288091   28409 certs.go:194] generating shared ca certs ...
	I0313 23:58:04.288105   28409 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:58:04.288255   28409 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0313 23:58:04.288306   28409 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0313 23:58:04.288320   28409 certs.go:256] generating profile certs ...
	I0313 23:58:04.288406   28409 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/client.key
	I0313 23:58:04.288441   28409 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.877a2b10
	I0313 23:58:04.288463   28409 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.877a2b10 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.31 192.168.39.47 192.168.39.156 192.168.39.254]
	I0313 23:58:04.453092   28409 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.877a2b10 ...
	I0313 23:58:04.453124   28409 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.877a2b10: {Name:mk7f4dfb8ffb67726421360a0ca328ea06182ab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:58:04.453293   28409 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.877a2b10 ...
	I0313 23:58:04.453304   28409 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.877a2b10: {Name:mkbf58ff48cd95f35e326039dbd8db4c6d576092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:58:04.453372   28409 certs.go:381] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt.877a2b10 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt
	I0313 23:58:04.453516   28409 certs.go:385] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key.877a2b10 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key
	I0313 23:58:04.453663   28409 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key
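The apiserver certificate is generated with IP SANs covering the in-cluster service IP, localhost, every control-plane node, and the VIP, so clients can reach the API server through any of those addresses. A minimal self-signed sketch (a stand-in for minikube's CA-signed flow) carrying the same SAN set:

    // Minimal sketch, assuming a self-signed cert instead of minikube's CA-signed
    // flow: create an apiserver-style certificate carrying the same IP SANs.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube-apiserver"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // Same IP SANs as the log: service IP, localhost, node IPs, and the VIP.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.31"), net.ParseIP("192.168.39.47"),
                net.ParseIP("192.168.39.156"), net.ParseIP("192.168.39.254"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }

In the real run the cert is signed by the minikubeCA key shown earlier; the self-signed template here only illustrates the SAN list.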
	I0313 23:58:04.453679   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0313 23:58:04.453691   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0313 23:58:04.453702   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0313 23:58:04.453719   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0313 23:58:04.453730   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0313 23:58:04.453740   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0313 23:58:04.453749   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0313 23:58:04.453760   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0313 23:58:04.453819   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0313 23:58:04.453846   28409 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0313 23:58:04.453853   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0313 23:58:04.453871   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0313 23:58:04.453894   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0313 23:58:04.453914   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0313 23:58:04.453947   28409 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0313 23:58:04.453974   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem -> /usr/share/ca-certificates/12268.pem
	I0313 23:58:04.453986   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /usr/share/ca-certificates/122682.pem
	I0313 23:58:04.453998   28409 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:58:04.454536   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0313 23:58:04.484150   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0313 23:58:04.511621   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0313 23:58:04.537753   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0313 23:58:04.563144   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0313 23:58:04.589432   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0313 23:58:04.614262   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0313 23:58:04.640703   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/ha-504633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0313 23:58:04.666472   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0313 23:58:04.694051   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0313 23:58:04.720108   28409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0313 23:58:04.745795   28409 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0313 23:58:04.764781   28409 ssh_runner.go:195] Run: openssl version
	I0313 23:58:04.771274   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0313 23:58:04.782595   28409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0313 23:58:04.787475   28409 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0313 23:58:04.787534   28409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0313 23:58:04.793412   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0313 23:58:04.803571   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0313 23:58:04.815663   28409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0313 23:58:04.820695   28409 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0313 23:58:04.820752   28409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0313 23:58:04.827074   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0313 23:58:04.837326   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0313 23:58:04.849315   28409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:58:04.854566   28409 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:58:04.854629   28409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:58:04.860986   28409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0313 23:58:04.871499   28409 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0313 23:58:04.877769   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0313 23:58:04.884080   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0313 23:58:04.890339   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0313 23:58:04.896291   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0313 23:58:04.902256   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0313 23:58:04.908159   28409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
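Each "openssl x509 -checkend 86400" run above asks whether the named certificate will still be valid 86400 seconds (24 hours) from now. A minimal Go equivalent (the certificate path below is a placeholder):

    // Minimal sketch of the "-checkend 86400" logic: report whether a certificate
    // expires within the next 24 hours. The file path is a placeholder.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
        } else {
            fmt.Println("certificate valid past 24h, expires", cert.NotAfter)
        }
    }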
	I0313 23:58:04.914108   28409 kubeadm.go:391] StartCluster: {Name:ha-504633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-504633 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.241 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:58:04.914211   28409 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0313 23:58:04.914255   28409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0313 23:58:05.005477   28409 cri.go:89] found id: "156780ad31a1bac42ed0c6b1253e86931042b024c17fa00f3f16bcfb829ffc9b"
	I0313 23:58:05.005504   28409 cri.go:89] found id: "997c2a0595975aac0fa1f4e2f4ed2b071768dbbe122a24a9ace7bcddac59a574"
	I0313 23:58:05.005508   28409 cri.go:89] found id: "705a44943e5ae9684327019d5cba671d9e6fc4baa380fc53f9177b6231975ffb"
	I0313 23:58:05.005511   28409 cri.go:89] found id: "d6dc521bb48cc0b39badfba80b2def42ad744f06beeca9bacdced9693d0c4531"
	I0313 23:58:05.005514   28409 cri.go:89] found id: "b8cd8ab250ed1073a4458b0b29e4e27e53e12d66a6679120c9537c32a944efe7"
	I0313 23:58:05.005517   28409 cri.go:89] found id: "aadb470eed29b2f719f5fbb858bf9123995c5f9752f94e4c060b37334f36098a"
	I0313 23:58:05.005519   28409 cri.go:89] found id: "91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025"
	I0313 23:58:05.005521   28409 cri.go:89] found id: "cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d"
	I0313 23:58:05.005524   28409 cri.go:89] found id: "b87585aab2e4e09551759e68536fb211fa1b0caf3e52eaa8a9cc6c2a02018f9c"
	I0313 23:58:05.005528   28409 cri.go:89] found id: "ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a"
	I0313 23:58:05.005531   28409 cri.go:89] found id: "ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714"
	I0313 23:58:05.005535   28409 cri.go:89] found id: "03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33"
	I0313 23:58:05.005538   28409 cri.go:89] found id: "f760286dfea8a62f478148ae4d5d43792cffbe25bf3faa4e1dc3f19d288fa6c1"
	I0313 23:58:05.005540   28409 cri.go:89] found id: "581070edea465f8d145d27a60ffb393b98695bf0829f1e57ab098e13914064c9"
	I0313 23:58:05.005553   28409 cri.go:89] found id: ""
	I0313 23:58:05.005637   28409 ssh_runner.go:195] Run: sudo runc list -f json
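The container IDs listed above come from the crictl query filtered to the kube-system namespace. A minimal sketch of that step (assuming crictl is on PATH and sudo does not prompt):

    // Minimal sketch of the container discovery step above: list kube-system
    // container IDs via crictl, mirroring the --quiet --label query in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        ids := strings.Fields(string(out))
        fmt.Printf("found %d kube-system containers\n", len(ids))
        for _, id := range ids {
            fmt.Println(id)
        }
    }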
	
	
	==> CRI-O <==
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.279428422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374650279403229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3e1f30b-1e7c-4ea4-8fb4-8d101640b8be name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.280086765Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bda55eb6-9f9c-4400-867c-e3b809eab914 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.280344522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bda55eb6-9f9c-4400-867c-e3b809eab914 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.281529746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec,PodSandboxId:408abe06ec2bdfeb08109f0a100bdc461530fd555b24398ea142ee5bbfca4647,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:9,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710374503795398841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710374344805858713,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad93782f06ad008a87fef3300e81338c4438423e076a68d6c3ae4af2619b74f,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710374339792026087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710374334790569126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710374331792652786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b6800024430de256acac63a91aee0f3c4048c486b4b6053d6e59210fb898fe,PodSandboxId:1833af16e7cfaf95c1a0781cae51b9cb93c6afd490822c41b42c1317193f9dc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710374323069349310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791,PodSandboxId:88f517fd35061841044e028230a19ed5fe98e43f5febc68a588a05080328c658,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710374290395150151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:b964950d4816e397c8b058ba0d8f87de8d09cd44fda17dce515d39044c93f420,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710374289594055159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a733ab58
6b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710374289989765088,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3d6f776a6b20ee3c1b32374c40385cd3b826094de
44efd90e86b2c4581cb25,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710374289880765157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed23280f4a5ee98a4f01e9712529e4f0da45e69d85ca4208e58b8c051827bb,
PodSandboxId:d374e5b744b402d34a336793d0b8b5232953c2e0e7acc19d595a5992ef77d217,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374289820663264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51,PodSandboxId:adde121b4482d5558f959b3bb88f58ebbf6ebb1c6271621ce69e6ca878f5584d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710374289759118794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba,PodSandboxId:64b3632a81b1a0792699205e294b67f1fd4957d0511f594937ead53999b1b69e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710374289723662664,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710374289629636370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85,PodSandboxId:2e1ee02dfee79aae3748308611205832d71c4ff9aef700d306cf26ac1dbd0e60,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374285184221956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710373793336215727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubern
etes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534968581242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534992474919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710373529504369187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710373509707712229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710373509712425832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bda55eb6-9f9c-4400-867c-e3b809eab914 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.334609722Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01d72715-158d-4d66-935d-791a82ab31e1 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.334688300Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01d72715-158d-4d66-935d-791a82ab31e1 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.336160038Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8792f0b1-0265-4742-ba89-3d72cdde5b92 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.336871092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374650336843587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8792f0b1-0265-4742-ba89-3d72cdde5b92 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.337774460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11f63896-1dda-4e2b-a4ad-dfa45d2b2edd name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.337832616Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11f63896-1dda-4e2b-a4ad-dfa45d2b2edd name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.338320766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec,PodSandboxId:408abe06ec2bdfeb08109f0a100bdc461530fd555b24398ea142ee5bbfca4647,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:9,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710374503795398841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710374344805858713,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad93782f06ad008a87fef3300e81338c4438423e076a68d6c3ae4af2619b74f,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710374339792026087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710374334790569126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710374331792652786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b6800024430de256acac63a91aee0f3c4048c486b4b6053d6e59210fb898fe,PodSandboxId:1833af16e7cfaf95c1a0781cae51b9cb93c6afd490822c41b42c1317193f9dc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710374323069349310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791,PodSandboxId:88f517fd35061841044e028230a19ed5fe98e43f5febc68a588a05080328c658,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710374290395150151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:b964950d4816e397c8b058ba0d8f87de8d09cd44fda17dce515d39044c93f420,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710374289594055159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a733ab58
6b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710374289989765088,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3d6f776a6b20ee3c1b32374c40385cd3b826094de
44efd90e86b2c4581cb25,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710374289880765157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed23280f4a5ee98a4f01e9712529e4f0da45e69d85ca4208e58b8c051827bb,
PodSandboxId:d374e5b744b402d34a336793d0b8b5232953c2e0e7acc19d595a5992ef77d217,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374289820663264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51,PodSandboxId:adde121b4482d5558f959b3bb88f58ebbf6ebb1c6271621ce69e6ca878f5584d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710374289759118794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba,PodSandboxId:64b3632a81b1a0792699205e294b67f1fd4957d0511f594937ead53999b1b69e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710374289723662664,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710374289629636370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85,PodSandboxId:2e1ee02dfee79aae3748308611205832d71c4ff9aef700d306cf26ac1dbd0e60,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374285184221956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710373793336215727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubern
etes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534968581242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534992474919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710373529504369187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710373509707712229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710373509712425832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11f63896-1dda-4e2b-a4ad-dfa45d2b2edd name=/runtime.v1.RuntimeService/ListContainers
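	The repeated crio debug entries above and below this point are CRI (Container Runtime Interface) polling traffic against the cri-o gRPC endpoint: /runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, and /runtime.v1.RuntimeService/ListContainers with an empty filter, which is what produces the "No filters were applied, returning full container list" lines and the near-identical container dumps on each poll. The following is a minimal Go sketch of those same three calls, assuming the default cri-o socket path /var/run/crio/crio.sock (the path is an assumption, not taken from this log) and the k8s.io/cri-api client package; it is illustrative only and not part of the test harness.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Assumed socket path; matches the default cri-o endpoint the kubelet dials.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		// /runtime.v1.RuntimeService/Version
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

		// /runtime.v1.ImageService/ImageFsInfo
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			panic(err)
		}
		for _, u := range fs.ImageFilesystems {
			fmt.Printf("image fs %s: %d bytes used\n", u.FsId.Mountpoint, u.UsedBytes.Value)
		}

		// /runtime.v1.RuntimeService/ListContainers with an empty filter, i.e. the call
		// that returns the full container list seen in the responses above.
		list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range list.Containers {
			fmt.Printf("%s (attempt %d): %s\n", c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}

	Running crictl ps -a against the same socket exercises the same ListContainers RPC, which is why the container lists in this log reappear essentially unchanged on every poll.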
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.385396048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c16199ee-f446-4233-b7d4-2d1aa34f04e8 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.385495772Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c16199ee-f446-4233-b7d4-2d1aa34f04e8 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.386662401Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b201da6c-826c-4b37-88c1-ea2a1c337b67 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.387213887Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374650387180093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b201da6c-826c-4b37-88c1-ea2a1c337b67 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.387858222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2493c65f-70de-45de-9484-bb625c07b7ce name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.387920891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2493c65f-70de-45de-9484-bb625c07b7ce name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.389156205Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec,PodSandboxId:408abe06ec2bdfeb08109f0a100bdc461530fd555b24398ea142ee5bbfca4647,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:9,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710374503795398841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710374344805858713,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad93782f06ad008a87fef3300e81338c4438423e076a68d6c3ae4af2619b74f,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710374339792026087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710374334790569126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710374331792652786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b6800024430de256acac63a91aee0f3c4048c486b4b6053d6e59210fb898fe,PodSandboxId:1833af16e7cfaf95c1a0781cae51b9cb93c6afd490822c41b42c1317193f9dc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710374323069349310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791,PodSandboxId:88f517fd35061841044e028230a19ed5fe98e43f5febc68a588a05080328c658,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710374290395150151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:b964950d4816e397c8b058ba0d8f87de8d09cd44fda17dce515d39044c93f420,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710374289594055159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a733ab58
6b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710374289989765088,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3d6f776a6b20ee3c1b32374c40385cd3b826094de
44efd90e86b2c4581cb25,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710374289880765157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed23280f4a5ee98a4f01e9712529e4f0da45e69d85ca4208e58b8c051827bb,
PodSandboxId:d374e5b744b402d34a336793d0b8b5232953c2e0e7acc19d595a5992ef77d217,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374289820663264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51,PodSandboxId:adde121b4482d5558f959b3bb88f58ebbf6ebb1c6271621ce69e6ca878f5584d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710374289759118794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba,PodSandboxId:64b3632a81b1a0792699205e294b67f1fd4957d0511f594937ead53999b1b69e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710374289723662664,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710374289629636370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85,PodSandboxId:2e1ee02dfee79aae3748308611205832d71c4ff9aef700d306cf26ac1dbd0e60,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374285184221956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710373793336215727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubern
etes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534968581242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534992474919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710373529504369187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710373509707712229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710373509712425832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2493c65f-70de-45de-9484-bb625c07b7ce name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.444486489Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab896917-b7ef-4033-bcbe-01adcd43e828 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.444572175Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab896917-b7ef-4033-bcbe-01adcd43e828 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.446299434Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53f7ab11-3ae6-477f-9db4-4e2d4c05e285 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.446748546Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710374650446722920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53f7ab11-3ae6-477f-9db4-4e2d4c05e285 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.447654962Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75b45711-5e73-4025-ba59-113113b56f63 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.447714646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75b45711-5e73-4025-ba59-113113b56f63 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:04:10 ha-504633 crio[4294]: time="2024-03-14 00:04:10.448243765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec,PodSandboxId:408abe06ec2bdfeb08109f0a100bdc461530fd555b24398ea142ee5bbfca4647,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:9,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710374503795398841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f7ed25c0cb42b2cf61135e6a1c245f,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710374344805858713,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad93782f06ad008a87fef3300e81338c4438423e076a68d6c3ae4af2619b74f,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710374339792026087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710374334790569126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710374331792652786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b6800024430de256acac63a91aee0f3c4048c486b4b6053d6e59210fb898fe,PodSandboxId:1833af16e7cfaf95c1a0781cae51b9cb93c6afd490822c41b42c1317193f9dc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710374323069349310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubernetes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791,PodSandboxId:88f517fd35061841044e028230a19ed5fe98e43f5febc68a588a05080328c658,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710374290395150151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:b964950d4816e397c8b058ba0d8f87de8d09cd44fda17dce515d39044c93f420,PodSandboxId:24d8f48eddc11e011a3a5d0c7fa5de7bfceff3732c02e293f177b06a6b50cc80,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710374289594055159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e57f625-8927-418c-bdf2-9022439f858c,},Annotations:map[string]string{io.kubernetes.container.hash: b1c37f69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a733ab58
6b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f,PodSandboxId:0ae4db3977043721e9b4ac5ef09302fede37dcd752a63a325ee8402a6e8c25f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710374289989765088,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8kvnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b356234a-5293-417c-b78f-8d532dfe1bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 48e267e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3d6f776a6b20ee3c1b32374c40385cd3b826094de
44efd90e86b2c4581cb25,PodSandboxId:3a02e247a65fa33bc9ac6dec2e4e7823c6f1840277d2ddde37858ff0fd07af09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710374289880765157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cdbdbd1a1d0aefa499a886ae738c0a,},Annotations:map[string]string{io.kubernetes.container.hash: c16c5abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed23280f4a5ee98a4f01e9712529e4f0da45e69d85ca4208e58b8c051827bb,
PodSandboxId:d374e5b744b402d34a336793d0b8b5232953c2e0e7acc19d595a5992ef77d217,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374289820663264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51,PodSandboxId:adde121b4482d5558f959b3bb88f58ebbf6ebb1c6271621ce69e6ca878f5584d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710374289759118794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba,PodSandboxId:64b3632a81b1a0792699205e294b67f1fd4957d0511f594937ead53999b1b69e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710374289723662664,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5,PodSandboxId:7114523c0a8869e1d69548207573e012d3d5d1d092807758d0de93ec0c549383,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710374289629636370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a4476828b7f0f0c95498e085ba5df9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85,PodSandboxId:2e1ee02dfee79aae3748308611205832d71c4ff9aef700d306cf26ac1dbd0e60,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710374285184221956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e670be31d057b5fa703139b89544938a0460984f9df20d57b09ea8088fb68ce,PodSandboxId:44694d6d0ddb18bd1e9664b21b92d0a8c38bd08203e1b0771032443a2c3832ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710373793336215727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dx92g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4da8d7b-2fcc-46b3-a6a3-12f23d16de43,},Annotations:map[string]string{io.kubern
etes.container.hash: 2add1fdd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d,PodSandboxId:99eec3703a3acb2084b14462cabba6f3fbdca463f4efc01f17af5c602ed8e3d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534968581242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hh2kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac81d022-8c47-4f99-8a34-bb4f73ead561,},Annotations:map[string]string{io.kubernetes.container.hash: b68c568b,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025,PodSandboxId:ac06f7523df34aeb6b4da8c16add8e44675a6407383bc12bcca1a4a95c0cd839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710373534992474919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-dbkfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb55bb86-7637-4571-af89-55b34361d46f,},Annotations:map[string]string{io.kubernetes.container.hash: ad230efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a,PodSandboxId:508491d3a970aaa0983c0ae69f3d5fe666f45725ece755e1140bb6735e106cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710373529504369187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j56zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfe6dd8-3bc7-46c3-916c-3aac95b2e6e4,},Annotations:map[string]string{io.kubernetes.container.hash: e00c4483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33,PodSandboxId:e5651d5d4cdf174224158e880f7ca8a484344dc4967fc596396bae7b467d2808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710373509707712229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c67e920ab8fd05e2d7c9a70920aeb5b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714,PodSandboxId:2e892e882693292772ed5c9542ea007ece29af986a5f8aa517c9cf0ced200b3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710373509712425832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-504633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800b1d8694f42b67376c6e23b8dd8603,},Annotations:map[string]string{io.kubernetes.container.hash: 11e02412,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75b45711-5e73-4025-ba59-113113b56f63 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ad22c90519039       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago       Exited              kube-vip                  9                   408abe06ec2bd       kube-vip-ha-504633
	32eccfe2db8bf       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Running             kindnet-cni               3                   0ae4db3977043       kindnet-8kvnb
	2ad93782f06ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       4                   24d8f48eddc11       storage-provisioner
	6c12af0f98a84       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      5 minutes ago       Running             kube-controller-manager   2                   7114523c0a886       kube-controller-manager-ha-504633
	0bb4395e019a7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      5 minutes ago       Running             kube-apiserver            3                   3a02e247a65fa       kube-apiserver-ha-504633
	d5b6800024430       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   1833af16e7cfa       busybox-5b5d89c9d6-dx92g
	365fcf57ea467       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      6 minutes ago       Running             kube-proxy                1                   88f517fd35061       kube-proxy-j56zl
	a733ab586b563       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Exited              kindnet-cni               2                   0ae4db3977043       kindnet-8kvnb
	be3d6f776a6b2       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      6 minutes ago       Exited              kube-apiserver            2                   3a02e247a65fa       kube-apiserver-ha-504633
	a6ed23280f4a5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   1                   d374e5b744b40       coredns-5dd5756b68-dbkfv
	28e15c659f106       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      6 minutes ago       Running             etcd                      1                   adde121b4482d       etcd-ha-504633
	597de64e318a0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      6 minutes ago       Running             kube-scheduler            1                   64b3632a81b1a       kube-scheduler-ha-504633
	e53161751ea00       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      6 minutes ago       Exited              kube-controller-manager   1                   7114523c0a886       kube-controller-manager-ha-504633
	b964950d4816e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       3                   24d8f48eddc11       storage-provisioner
	a32ba91e1ce55       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   1                   2e1ee02dfee79       coredns-5dd5756b68-hh2kw
	3e670be31d057       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   14 minutes ago      Exited              busybox                   0                   44694d6d0ddb1       busybox-5b5d89c9d6-dx92g
	91c5fdb6071ed       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      18 minutes ago      Exited              coredns                   0                   ac06f7523df34       coredns-5dd5756b68-dbkfv
	cea68e46e7574       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      18 minutes ago      Exited              coredns                   0                   99eec3703a3ac       coredns-5dd5756b68-hh2kw
	ce0dc1e514cfe       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      18 minutes ago      Exited              kube-proxy                0                   508491d3a970a       kube-proxy-j56zl
	ec04eb9f36ad1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      19 minutes ago      Exited              etcd                      0                   2e892e8826932       etcd-ha-504633
	03595624eed74       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      19 minutes ago      Exited              kube-scheduler            0                   e5651d5d4cdf1       kube-scheduler-ha-504633
	
	
	==> coredns [91c5fdb6071edac89263d27bde381c0bdb4b86b8069dc052b832779c397a2025] <==
	[INFO] 10.244.2.2:45263 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00157594s
	[INFO] 10.244.2.2:56184 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095082s
	[INFO] 10.244.2.2:38062 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145314s
	[INFO] 10.244.2.2:47535 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099682s
	[INFO] 10.244.1.2:38146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248518s
	[INFO] 10.244.1.2:54521 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00160289s
	[INFO] 10.244.1.2:34985 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001396473s
	[INFO] 10.244.1.2:37504 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127175s
	[INFO] 10.244.1.2:47786 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089644s
	[INFO] 10.244.0.4:42865 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167315s
	[INFO] 10.244.2.2:37374 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167385s
	[INFO] 10.244.2.2:33251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009522s
	[INFO] 10.244.1.2:39140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158704s
	[INFO] 10.244.1.2:36398 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143215s
	[INFO] 10.244.1.2:60528 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012073s
	[INFO] 10.244.1.2:45057 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013653s
	[INFO] 10.244.0.4:55605 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153423s
	[INFO] 10.244.1.2:37595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218212s
	[INFO] 10.244.1.2:45054 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155156s
	[INFO] 10.244.1.2:45734 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159775s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a32ba91e1ce5532513e9ffaf332fa65a9e291a1dcbbb147c7e8dc384017bfd85] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46251 - 3158 "HINFO IN 4020314174239755005.6788368900148723181. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006083937s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a6ed23280f4a5ee98a4f01e9712529e4f0da45e69d85ca4208e58b8c051827bb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39174 - 27518 "HINFO IN 5234567318487603077.6782029109910001331. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009507212s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53538->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [cea68e46e7574bae817bbaa900672fea5a2e5dd9a2847843cb7b016b9ccc5c0d] <==
	[INFO] 10.244.0.4:36734 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153111s
	[INFO] 10.244.0.4:36918 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002576888s
	[INFO] 10.244.2.2:52506 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000216481s
	[INFO] 10.244.2.2:41181 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142291s
	[INFO] 10.244.1.2:41560 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185807s
	[INFO] 10.244.1.2:34843 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104567s
	[INFO] 10.244.1.2:36490 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000226318s
	[INFO] 10.244.0.4:60091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107953s
	[INFO] 10.244.0.4:37327 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151724s
	[INFO] 10.244.0.4:35399 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043972s
	[INFO] 10.244.2.2:59809 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090745s
	[INFO] 10.244.2.2:40239 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069623s
	[INFO] 10.244.0.4:36867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127937s
	[INFO] 10.244.0.4:35854 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000195121s
	[INFO] 10.244.0.4:56742 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109765s
	[INFO] 10.244.2.2:33696 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132875s
	[INFO] 10.244.2.2:51474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149174s
	[INFO] 10.244.2.2:58642 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010185s
	[INFO] 10.244.2.2:58203 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089769s
	[INFO] 10.244.1.2:54587 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000118471s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-504633
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_13T23_45_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:45:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:04:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 00:04:07 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 00:04:07 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 00:04:07 +0000   Wed, 13 Mar 2024 23:45:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 00:04:07 +0000   Wed, 13 Mar 2024 23:45:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.31
	  Hostname:    ha-504633
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 13fd8f4b90794ddf8d3d6bdb9051c529
	  System UUID:                13fd8f4b-9079-4ddf-8d3d-6bdb9051c529
	  Boot ID:                    83daf814-565c-4717-8930-43f7c53558eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-dx92g             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-5dd5756b68-dbkfv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-5dd5756b68-hh2kw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-ha-504633                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-8kvnb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-504633             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-504633    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-j56zl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-504633             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-504633                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5m17s              kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 19m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-504633 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-504633 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-504633 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18m                kubelet          Node ha-504633 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m                kubelet          Node ha-504633 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m                kubelet          Node ha-504633 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           18m                node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal   NodeReady                18m                kubelet          Node ha-504633 status is now: NodeReady
	  Normal   RegisteredNode           16m                node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Warning  ContainerGCFailed        6m54s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m10s              node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal   RegisteredNode           5m4s               node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	  Normal   RegisteredNode           3m58s              node-controller  Node ha-504633 event: Registered Node ha-504633 in Controller
	
	
	Name:               ha-504633-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_13T23_47_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:47:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:04:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Mar 2024 23:59:34 +0000   Wed, 13 Mar 2024 23:59:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Mar 2024 23:59:34 +0000   Wed, 13 Mar 2024 23:59:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Mar 2024 23:59:34 +0000   Wed, 13 Mar 2024 23:59:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Mar 2024 23:59:34 +0000   Wed, 13 Mar 2024 23:59:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    ha-504633-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f6ba1a02ba14580ac16771f2b426854
	  System UUID:                5f6ba1a0-2ba1-4580-ac16-771f2b426854
	  Boot ID:                    213c5b73-5c4f-4560-89e6-87c5c4535369
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-zfjjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-504633-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-f4pz8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-504633-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-504633-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-4s9t5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-504633-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-504633-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 16m                    kube-proxy       
	  Normal  Starting                 4m50s                  kube-proxy       
	  Normal  RegisteredNode           16m                    node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-504633-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  5m44s (x8 over 5m44s)  kubelet          Node ha-504633-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    5m44s (x8 over 5m44s)  kubelet          Node ha-504633-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m44s (x7 over 5m44s)  kubelet          Node ha-504633-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode           5m4s                   node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	  Normal  RegisteredNode           3m58s                  node-controller  Node ha-504633-m02 event: Registered Node ha-504633-m02 in Controller
	
	
	Name:               ha-504633-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-504633-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=ha-504633
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_13T23_50_35_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:50:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-504633-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:01:47 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 14 Mar 2024 00:01:06 +0000   Thu, 14 Mar 2024 00:02:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 14 Mar 2024 00:01:06 +0000   Thu, 14 Mar 2024 00:02:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 14 Mar 2024 00:01:06 +0000   Thu, 14 Mar 2024 00:02:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 14 Mar 2024 00:01:06 +0000   Thu, 14 Mar 2024 00:02:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-504633-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d985b67edcea4528bf49bb9fe5eeb65e
	  System UUID:                d985b67e-dcea-4528-bf49-bb9fe5eeb65e
	  Boot ID:                    9a8ee068-23d6-49e5-a453-ae122df76fb3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-tcqdr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 kindnet-dn6gl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-7hr7b            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m31s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet          Node ha-504633-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet          Node ha-504633-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet          Node ha-504633-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-504633-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m10s                  node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   RegisteredNode           5m4s                   node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   RegisteredNode           3m58s                  node-controller  Node ha-504633-m04 event: Registered Node ha-504633-m04 in Controller
	  Normal   NodeHasNoDiskPressure    3m35s (x3 over 3m35s)  kubelet          Node ha-504633-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  3m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m35s (x3 over 3m35s)  kubelet          Node ha-504633-m04 status is now: NodeHasSufficientMemory
	  Normal   Starting                 3m35s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     3m35s (x3 over 3m35s)  kubelet          Node ha-504633-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 3m35s (x2 over 3m35s)  kubelet          Node ha-504633-m04 has been rebooted, boot id: 9a8ee068-23d6-49e5-a453-ae122df76fb3
	  Normal   NodeReady                3m35s (x2 over 3m35s)  kubelet          Node ha-504633-m04 status is now: NodeReady
	  Normal   NodeNotReady             100s (x2 over 4m30s)   node-controller  Node ha-504633-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +9.715783] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.056763] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061483] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.171003] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.142829] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.235386] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Mar13 23:45] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.057845] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.706645] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.862236] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.155181] kauditd_printk_skb: 51 callbacks suppressed
	[  +2.379152] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[ +12.986535] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.322868] kauditd_printk_skb: 43 callbacks suppressed
	[Mar13 23:46] kauditd_printk_skb: 27 callbacks suppressed
	[Mar13 23:58] systemd-fstab-generator[4216]: Ignoring "noauto" option for root device
	[  +0.153832] systemd-fstab-generator[4228]: Ignoring "noauto" option for root device
	[  +0.187781] systemd-fstab-generator[4242]: Ignoring "noauto" option for root device
	[  +0.149225] systemd-fstab-generator[4254]: Ignoring "noauto" option for root device
	[  +0.268609] systemd-fstab-generator[4278]: Ignoring "noauto" option for root device
	[  +0.830313] systemd-fstab-generator[4380]: Ignoring "noauto" option for root device
	[  +5.053578] kauditd_printk_skb: 132 callbacks suppressed
	[  +7.758209] kauditd_printk_skb: 80 callbacks suppressed
	[ +34.892345] kauditd_printk_skb: 1 callbacks suppressed
	[  +7.328226] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [28e15c659f106ec11cbf33e7d38a368ab89ef257cca6e467d1fdb92800007b51] <==
	{"level":"info","ts":"2024-03-13T23:59:53.825578Z","caller":"traceutil/trace.go:171","msg":"trace[269743063] transaction","detail":"{read_only:false; response_revision:2063; number_of_response:1; }","duration":"165.628751ms","start":"2024-03-13T23:59:53.659924Z","end":"2024-03-13T23:59:53.825552Z","steps":["trace[269743063] 'process raft request'  (duration: 165.485531ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-13T23:59:54.1971Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c25b0656f1ce3d71","to":"8a6ebbe0b7bc25b1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-13T23:59:54.197206Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:59:54.197253Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:59:54.214156Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:59:54.218035Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c25b0656f1ce3d71","to":"8a6ebbe0b7bc25b1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-13T23:59:54.218084Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:59:54.220119Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"warn","ts":"2024-03-14T00:00:49.779255Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.156:49200","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-03-14T00:00:49.826375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c25b0656f1ce3d71 switched to configuration voters=(14004794436732468593 17975950259062721749)"}
	{"level":"info","ts":"2024-03-14T00:00:49.826732Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"62f0e98a58b5dbcf","local-member-id":"c25b0656f1ce3d71","removed-remote-peer-id":"8a6ebbe0b7bc25b1","removed-remote-peer-urls":["https://192.168.39.156:2380"]}
	{"level":"info","ts":"2024-03-14T00:00:49.826883Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"warn","ts":"2024-03-14T00:00:49.82719Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-14T00:00:49.827246Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"warn","ts":"2024-03-14T00:00:49.828215Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-14T00:00:49.828302Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-14T00:00:49.828787Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"warn","ts":"2024-03-14T00:00:49.82927Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1","error":"context canceled"}
	{"level":"warn","ts":"2024-03-14T00:00:49.829587Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"8a6ebbe0b7bc25b1","error":"failed to read 8a6ebbe0b7bc25b1 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-03-14T00:00:49.829634Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"warn","ts":"2024-03-14T00:00:49.829868Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1","error":"context canceled"}
	{"level":"info","ts":"2024-03-14T00:00:49.82995Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-14T00:00:49.83027Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-14T00:00:49.83038Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"c25b0656f1ce3d71","removed-remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"warn","ts":"2024-03-14T00:00:49.851219Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"c25b0656f1ce3d71","remote-peer-id-stream-handler":"c25b0656f1ce3d71","remote-peer-id-from":"8a6ebbe0b7bc25b1"}
	
	
	==> etcd [ec04eb9f36ad1b702fc23358de142c7ed16a93d3ce790b0a52ad5d40294b6714] <==
	{"level":"info","ts":"2024-03-13T23:56:31.030267Z","caller":"traceutil/trace.go:171","msg":"trace[82422090] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; }","duration":"146.582127ms","start":"2024-03-13T23:56:30.883677Z","end":"2024-03-13T23:56:31.03026Z","steps":["trace[82422090] 'agreement among raft nodes before linearized reading'  (duration: 133.285904ms)"],"step_count":1}
	WARNING: 2024/03/13 23:56:31 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-13T23:56:31.030362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.824463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-03-13T23:56:31.030376Z","caller":"traceutil/trace.go:171","msg":"trace[30097541] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; }","duration":"117.845034ms","start":"2024-03-13T23:56:30.912526Z","end":"2024-03-13T23:56:31.030371Z","steps":["trace[30097541] 'agreement among raft nodes before linearized reading'  (duration: 117.823726ms)"],"step_count":1}
	WARNING: 2024/03/13 23:56:31 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-13T23:56:31.046603Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.31:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-13T23:56:31.046656Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.31:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-13T23:56:31.046727Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"c25b0656f1ce3d71","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-13T23:56:31.046878Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.04692Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047134Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047383Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047473Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047537Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047579Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f97767851b864cd5"}
	{"level":"info","ts":"2024-03-13T23:56:31.047588Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047598Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.04764Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047719Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047766Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047821Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c25b0656f1ce3d71","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.047855Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8a6ebbe0b7bc25b1"}
	{"level":"info","ts":"2024-03-13T23:56:31.051496Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.31:2380"}
	{"level":"info","ts":"2024-03-13T23:56:31.051611Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.31:2380"}
	{"level":"info","ts":"2024-03-13T23:56:31.051643Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-504633","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.31:2380"],"advertise-client-urls":["https://192.168.39.31:2379"]}
	
	
	==> kernel <==
	 00:04:11 up 19 min,  0 users,  load average: 0.17, 0.33, 0.32
	Linux ha-504633 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [32eccfe2db8bfabef8a8005f153311f13f559c52a6c98993db8e931f66c75f48] <==
	I0314 00:03:26.268258       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0314 00:03:36.284828       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0314 00:03:36.284882       1 main.go:227] handling current node
	I0314 00:03:36.284902       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0314 00:03:36.284909       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0314 00:03:36.286369       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0314 00:03:36.286406       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0314 00:03:46.296565       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0314 00:03:46.296623       1 main.go:227] handling current node
	I0314 00:03:46.296637       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0314 00:03:46.296645       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0314 00:03:46.296774       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0314 00:03:46.296782       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0314 00:03:56.314384       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0314 00:03:56.314433       1 main.go:227] handling current node
	I0314 00:03:56.314444       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0314 00:03:56.314450       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0314 00:03:56.314671       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0314 00:03:56.314680       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	I0314 00:04:06.345702       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I0314 00:04:06.345755       1 main.go:227] handling current node
	I0314 00:04:06.345767       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0314 00:04:06.345774       1 main.go:250] Node ha-504633-m02 has CIDR [10.244.1.0/24] 
	I0314 00:04:06.345912       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0314 00:04:06.345946       1 main.go:250] Node ha-504633-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [a733ab586b563b834a48ce0c9870376fc1759813c241e5ee49abbad48c7f018f] <==
	I0313 23:58:10.556238       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0313 23:58:10.558062       1 main.go:107] hostIP = 192.168.39.31
	podIP = 192.168.39.31
	I0313 23:58:10.558267       1 main.go:116] setting mtu 1500 for CNI 
	I0313 23:58:10.561039       1 main.go:146] kindnetd IP family: "ipv4"
	I0313 23:58:10.561117       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0313 23:58:12.048699       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0313 23:58:15.120735       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0313 23:58:26.122664       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0313 23:58:30.480524       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0313 23:58:33.552553       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [0bb4395e019a757e73e77dc798eb75ad052c1496ef1516bbeae0a8462898b0d0] <==
	I0313 23:58:54.092382       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0313 23:58:54.092558       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0313 23:58:54.154312       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0313 23:58:54.165829       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0313 23:58:54.171516       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0313 23:58:54.171529       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0313 23:58:54.172185       1 shared_informer.go:318] Caches are synced for configmaps
	I0313 23:58:54.175389       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0313 23:58:54.175565       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0313 23:58:54.179153       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	W0313 23:58:54.188452       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.156 192.168.39.47]
	I0313 23:58:54.190384       1 controller.go:624] quota admission added evaluator for: endpoints
	I0313 23:58:54.192454       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0313 23:58:54.192514       1 aggregator.go:166] initial CRD sync complete...
	I0313 23:58:54.192536       1 autoregister_controller.go:141] Starting autoregister controller
	I0313 23:58:54.192544       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0313 23:58:54.192552       1 cache.go:39] Caches are synced for autoregister controller
	I0313 23:58:54.200231       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0313 23:58:54.207778       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0313 23:58:55.095788       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0313 23:58:55.628745       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.156 192.168.39.31 192.168.39.47]
	E0314 00:01:51.032748       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0314 00:01:51.033052       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0314 00:01:51.034500       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0314 00:01:51.034649       1 timeout.go:142] post-timeout activity - time-elapsed: 1.975754ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result: <nil>
	
	
	==> kube-apiserver [be3d6f776a6b20ee3c1b32374c40385cd3b826094de44efd90e86b2c4581cb25] <==
	I0313 23:58:10.587833       1 options.go:220] external host was not specified, using 192.168.39.31
	I0313 23:58:10.589141       1 server.go:148] Version: v1.28.4
	I0313 23:58:10.589190       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:58:10.960512       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0313 23:58:10.972325       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0313 23:58:10.972408       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0313 23:58:10.972706       1 instance.go:298] Using reconciler: lease
	W0313 23:58:30.947173       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0313 23:58:30.952347       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0313 23:58:30.973636       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [6c12af0f98a849c99d6e370f55c6e0699d6fc1d2f39b5add04f5ffec77fe905a] <==
	I0314 00:00:46.633368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="51.52126ms"
	I0314 00:00:46.634223       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-sqk5k"
	I0314 00:00:46.676457       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.969905ms"
	I0314 00:00:46.676803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="266.804µs"
	I0314 00:00:50.285091       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.359559ms"
	I0314 00:00:50.285344       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="71.947µs"
	I0314 00:01:11.423295       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-504633-m04"
	I0314 00:01:11.587285       1 event.go:307] "Event occurred" object="ha-504633-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node ha-504633-m03 event: Removing Node ha-504633-m03 from Controller"
	E0314 00:01:26.523396       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:26.523522       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:26.523554       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:26.523578       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:26.523669       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:26.523698       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:46.524759       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:46.524809       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:46.524818       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:46.524824       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:46.524830       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	E0314 00:01:46.524836       1 gc_controller.go:153] "Failed to get node" err="node \"ha-504633-m03\" not found" node="ha-504633-m03"
	I0314 00:02:00.279207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="269.317µs"
	I0314 00:02:00.292335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="107.819µs"
	I0314 00:02:00.303082       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="123.322µs"
	I0314 00:02:30.376681       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="17.158696ms"
	I0314 00:02:30.377328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="256.481µs"
	
	
	==> kube-controller-manager [e53161751ea00a66f2480817c80526cee6dd31ca848f84df4311daf5d04257e5] <==
	I0313 23:58:11.449358       1 serving.go:348] Generated self-signed cert in-memory
	I0313 23:58:11.925415       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0313 23:58:11.925496       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:58:11.927114       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0313 23:58:11.927301       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0313 23:58:11.928174       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0313 23:58:11.928326       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0313 23:58:31.981043       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.31:8443/healthz\": dial tcp 192.168.39.31:8443: connect: connection refused"
	
	
	==> kube-proxy [365fcf57ea4674ce7e3d6a71ff587d5fecd19c831c63a445ed1f62d97df8b791] <==
	I0313 23:58:11.279414       1 server_others.go:69] "Using iptables proxy"
	E0313 23:58:12.945927       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:16.017290       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:19.090373       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:25.232466       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:34.449562       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	E0313 23:58:52.880584       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-504633": dial tcp 192.168.39.254:8443: connect: no route to host
	I0313 23:58:52.883107       1 server.go:969] "Can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag"
	I0313 23:58:52.945876       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0313 23:58:52.945937       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0313 23:58:52.950525       1 server_others.go:152] "Using iptables Proxier"
	I0313 23:58:52.951121       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0313 23:58:52.952171       1 server.go:846] "Version info" version="v1.28.4"
	I0313 23:58:52.952216       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:58:52.957526       1 config.go:188] "Starting service config controller"
	I0313 23:58:52.957594       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0313 23:58:52.957651       1 config.go:97] "Starting endpoint slice config controller"
	I0313 23:58:52.957659       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0313 23:58:52.958592       1 config.go:315] "Starting node config controller"
	I0313 23:58:52.958637       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0313 23:58:54.958592       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0313 23:58:54.958700       1 shared_informer.go:318] Caches are synced for node config
	I0313 23:58:54.958712       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [ce0dc1e514cfec2b4ca91fa3f285a3bb522fc459432ed331cf0845d22f934f6a] <==
	I0313 23:45:29.711578       1 server_others.go:69] "Using iptables proxy"
	I0313 23:45:29.730452       1 node.go:141] Successfully retrieved node IP: 192.168.39.31
	I0313 23:45:29.778135       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0313 23:45:29.778173       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0313 23:45:29.781710       1 server_others.go:152] "Using iptables Proxier"
	I0313 23:45:29.782511       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0313 23:45:29.782796       1 server.go:846] "Version info" version="v1.28.4"
	I0313 23:45:29.782835       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:45:29.784428       1 config.go:188] "Starting service config controller"
	I0313 23:45:29.785222       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0313 23:45:29.785343       1 config.go:315] "Starting node config controller"
	I0313 23:45:29.785372       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0313 23:45:29.785796       1 config.go:97] "Starting endpoint slice config controller"
	I0313 23:45:29.785829       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0313 23:45:29.885734       1 shared_informer.go:318] Caches are synced for node config
	I0313 23:45:29.885761       1 shared_informer.go:318] Caches are synced for service config
	I0313 23:45:29.886938       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [03595624eed74fe8216cb9b912c39cd4397884a7270ffdc3599be60690202e33] <==
	W0313 23:56:27.311602       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0313 23:56:27.311659       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0313 23:56:27.533068       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0313 23:56:27.533165       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0313 23:56:27.604660       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0313 23:56:27.604838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0313 23:56:27.754398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0313 23:56:27.754565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0313 23:56:27.772480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0313 23:56:27.772646       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0313 23:56:27.791220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0313 23:56:27.791266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0313 23:56:27.793169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0313 23:56:27.793208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0313 23:56:28.027210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0313 23:56:28.027234       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0313 23:56:28.092424       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0313 23:56:28.092523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0313 23:56:28.939272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0313 23:56:28.939349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0313 23:56:29.169752       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0313 23:56:29.169860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0313 23:56:31.002106       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0313 23:56:31.002261       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0313 23:56:31.002463       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [597de64e318a05c7b0e9649ed9ec4e3fc888f72c271b3ebe8bb9c3909b1b25ba] <==
	W0313 23:58:47.947622       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.31:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:47.947699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.31:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:48.337094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.31:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:48.337168       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.31:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:48.509652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.31:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:48.509716       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.31:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:48.904352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.39.31:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:48.904411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.31:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:49.047777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.31:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:49.047939       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.31:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:49.433391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.31:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:49.433515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.31:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:50.731428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.31:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:50.731551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.31:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:51.059885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.31:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:51.060023       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.31:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:51.726045       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.31:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:51.726148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.31:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	W0313 23:58:51.991331       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.31:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	E0313 23:58:51.991397       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.31:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.31:8443: connect: connection refused
	I0313 23:59:07.387507       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0314 00:00:46.490419       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-tcqdr\": pod busybox-5b5d89c9d6-tcqdr is already assigned to node \"ha-504633-m04\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-tcqdr" node="ha-504633-m04"
	E0314 00:00:46.493621       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod eb2cd887-fa57-4342-b3ff-90cc3acd8c6e(default/busybox-5b5d89c9d6-tcqdr) wasn't assumed so cannot be forgotten"
	E0314 00:00:46.493889       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-tcqdr\": pod busybox-5b5d89c9d6-tcqdr is already assigned to node \"ha-504633-m04\"" pod="default/busybox-5b5d89c9d6-tcqdr"
	I0314 00:00:46.493961       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-tcqdr" node="ha-504633-m04"
	
	
	==> kubelet <==
	Mar 14 00:02:16 ha-504633 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 00:02:16 ha-504633 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 00:02:18 ha-504633 kubelet[1439]: I0314 00:02:18.777455    1439 scope.go:117] "RemoveContainer" containerID="ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec"
	Mar 14 00:02:18 ha-504633 kubelet[1439]: E0314 00:02:18.777811    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:02:33 ha-504633 kubelet[1439]: I0314 00:02:33.777210    1439 scope.go:117] "RemoveContainer" containerID="ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec"
	Mar 14 00:02:33 ha-504633 kubelet[1439]: E0314 00:02:33.777522    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:02:44 ha-504633 kubelet[1439]: I0314 00:02:44.777407    1439 scope.go:117] "RemoveContainer" containerID="ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec"
	Mar 14 00:02:44 ha-504633 kubelet[1439]: E0314 00:02:44.778205    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:02:58 ha-504633 kubelet[1439]: I0314 00:02:58.776725    1439 scope.go:117] "RemoveContainer" containerID="ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec"
	Mar 14 00:02:58 ha-504633 kubelet[1439]: E0314 00:02:58.778341    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:03:13 ha-504633 kubelet[1439]: I0314 00:03:13.777099    1439 scope.go:117] "RemoveContainer" containerID="ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec"
	Mar 14 00:03:13 ha-504633 kubelet[1439]: E0314 00:03:13.778040    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:03:16 ha-504633 kubelet[1439]: E0314 00:03:16.828131    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 00:03:16 ha-504633 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 00:03:16 ha-504633 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 00:03:16 ha-504633 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 00:03:16 ha-504633 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 00:03:28 ha-504633 kubelet[1439]: I0314 00:03:28.776712    1439 scope.go:117] "RemoveContainer" containerID="ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec"
	Mar 14 00:03:28 ha-504633 kubelet[1439]: E0314 00:03:28.777100    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:03:43 ha-504633 kubelet[1439]: I0314 00:03:43.776776    1439 scope.go:117] "RemoveContainer" containerID="ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec"
	Mar 14 00:03:43 ha-504633 kubelet[1439]: E0314 00:03:43.777210    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:03:54 ha-504633 kubelet[1439]: I0314 00:03:54.777071    1439 scope.go:117] "RemoveContainer" containerID="ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec"
	Mar 14 00:03:54 ha-504633 kubelet[1439]: E0314 00:03:54.779238    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	Mar 14 00:04:08 ha-504633 kubelet[1439]: I0314 00:04:08.777458    1439 scope.go:117] "RemoveContainer" containerID="ad22c9051903912b0528bc96e5e4a5cac871eb62b0f5a3bd7e0db235a15cbeec"
	Mar 14 00:04:08 ha-504633 kubelet[1439]: E0314 00:04:08.778503    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-504633_kube-system(b9f7ed25c0cb42b2cf61135e6a1c245f)\"" pod="kube-system/kube-vip-ha-504633" podUID="b9f7ed25c0cb42b2cf61135e6a1c245f"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:04:09.980479   30448 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18375-4912/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
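Note on the stderr block above: "bufio.Scanner: token too long" is the stock Go error raised when a single line exceeds the scanner's default 64 KiB token limit (bufio.MaxScanTokenSize), which is what happens here while reading lastStart.txt. A minimal sketch of the failure mode and the usual workaround, assuming an illustrative file path and buffer sizes (this is not minikube's actual code):

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// "lastStart.txt" is a hypothetical stand-in for the log file named in the report.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); a longer line makes
		// Scan stop and Err return bufio.ErrTooLong ("token too long").
		// Raising the cap (10 MiB here, an arbitrary choice) avoids that.
		scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			log.Fatal(err)
		}
	}

Without the Buffer call, any line longer than 64 KiB causes scanner.Err() to return bufio.ErrTooLong, which matches the message in the stderr above.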
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-504633 -n ha-504633
helpers_test.go:261: (dbg) Run:  kubectl --context ha-504633 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/StopCluster (142.08s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (307.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-507871
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-507871
E0314 00:19:44.448747   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-507871: exit status 82 (2m2.018292638s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-507871-m03"  ...
	* Stopping node "multinode-507871-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-507871" : exit status 82
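The GUEST_STOP_TIMEOUT above indicates the stop command gave up while the VM still reported state "Running" at the deadline. A generic poll-until-stopped sketch of that pattern, with hypothetical requestStop/getState helpers and an assumed two-minute deadline (illustrative only, not minikube's implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// requestStop and getState are hypothetical stand-ins for the VM driver calls.
	func requestStop() error        { return nil }
	func getState() (string, error) { return "Running", nil }

	// stopWithTimeout polls the VM state until it reports "Stopped" or the
	// deadline passes; exceeding the deadline produces the kind of
	// "unable to stop vm, current state Running" error seen in the report.
	func stopWithTimeout(timeout time.Duration) error {
		if err := requestStop(); err != nil {
			return err
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			state, err := getState()
			if err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return errors.New(`stop: unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := stopWithTimeout(2 * time.Minute); err != nil {
			fmt.Println("GUEST_STOP_TIMEOUT:", err)
		}
	}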
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-507871 --wait=true -v=8 --alsologtostderr
E0314 00:21:39.381246   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0314 00:23:36.336431   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-507871 --wait=true -v=8 --alsologtostderr: (3m3.310374516s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-507871
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-507871 -n multinode-507871
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-507871 logs -n 25: (1.635695675s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-507871 cp multinode-507871-m02:/home/docker/cp-test.txt                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3007186328/001/cp-test_multinode-507871-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-507871 cp multinode-507871-m02:/home/docker/cp-test.txt                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871:/home/docker/cp-test_multinode-507871-m02_multinode-507871.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n multinode-507871 sudo cat                                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | /home/docker/cp-test_multinode-507871-m02_multinode-507871.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-507871 cp multinode-507871-m02:/home/docker/cp-test.txt                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m03:/home/docker/cp-test_multinode-507871-m02_multinode-507871-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n multinode-507871-m03 sudo cat                                   | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | /home/docker/cp-test_multinode-507871-m02_multinode-507871-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-507871 cp testdata/cp-test.txt                                                | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-507871 cp multinode-507871-m03:/home/docker/cp-test.txt                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3007186328/001/cp-test_multinode-507871-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-507871 cp multinode-507871-m03:/home/docker/cp-test.txt                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871:/home/docker/cp-test_multinode-507871-m03_multinode-507871.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n multinode-507871 sudo cat                                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | /home/docker/cp-test_multinode-507871-m03_multinode-507871.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-507871 cp multinode-507871-m03:/home/docker/cp-test.txt                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m02:/home/docker/cp-test_multinode-507871-m03_multinode-507871-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n multinode-507871-m02 sudo cat                                   | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | /home/docker/cp-test_multinode-507871-m03_multinode-507871-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-507871 node stop m03                                                          | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	| node    | multinode-507871 node start                                                             | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:19 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-507871                                                                | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:19 UTC |                     |
	| stop    | -p multinode-507871                                                                     | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:19 UTC |                     |
	| start   | -p multinode-507871                                                                     | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:21 UTC | 14 Mar 24 00:24 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-507871                                                                | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:24 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:21:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:21:15.665094   39054 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:21:15.665226   39054 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:21:15.665237   39054 out.go:304] Setting ErrFile to fd 2...
	I0314 00:21:15.665244   39054 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:21:15.665430   39054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:21:15.665984   39054 out.go:298] Setting JSON to false
	I0314 00:21:15.666898   39054 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3819,"bootTime":1710371857,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:21:15.666955   39054 start.go:139] virtualization: kvm guest
	I0314 00:21:15.669104   39054 out.go:177] * [multinode-507871] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:21:15.670803   39054 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:21:15.670850   39054 notify.go:220] Checking for updates...
	I0314 00:21:15.672145   39054 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:21:15.673749   39054 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:21:15.674963   39054 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:21:15.676209   39054 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:21:15.677459   39054 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:21:15.679164   39054 config.go:182] Loaded profile config "multinode-507871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:21:15.679268   39054 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:21:15.679669   39054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:21:15.679719   39054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:21:15.694867   39054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I0314 00:21:15.695295   39054 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:21:15.695809   39054 main.go:141] libmachine: Using API Version  1
	I0314 00:21:15.695825   39054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:21:15.696160   39054 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:21:15.696431   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:21:15.732193   39054 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 00:21:15.733563   39054 start.go:297] selected driver: kvm2
	I0314 00:21:15.733578   39054 start.go:901] validating driver "kvm2" against &{Name:multinode-507871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-507871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:21:15.733728   39054 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:21:15.734047   39054 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:21:15.734110   39054 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:21:15.749198   39054 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:21:15.750219   39054 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 00:21:15.750341   39054 cni.go:84] Creating CNI manager for ""
	I0314 00:21:15.750367   39054 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0314 00:21:15.750458   39054 start.go:340] cluster config:
	{Name:multinode-507871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-507871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:21:15.750701   39054 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:21:15.752608   39054 out.go:177] * Starting "multinode-507871" primary control-plane node in "multinode-507871" cluster
	I0314 00:21:15.754181   39054 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:21:15.754218   39054 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 00:21:15.754225   39054 cache.go:56] Caching tarball of preloaded images
	I0314 00:21:15.754341   39054 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 00:21:15.754361   39054 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 00:21:15.754481   39054 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/config.json ...
	I0314 00:21:15.754679   39054 start.go:360] acquireMachinesLock for multinode-507871: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:21:15.754718   39054 start.go:364] duration metric: took 21.911µs to acquireMachinesLock for "multinode-507871"
	I0314 00:21:15.754735   39054 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:21:15.754742   39054 fix.go:54] fixHost starting: 
	I0314 00:21:15.755030   39054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:21:15.755061   39054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:21:15.769002   39054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39569
	I0314 00:21:15.769472   39054 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:21:15.770004   39054 main.go:141] libmachine: Using API Version  1
	I0314 00:21:15.770030   39054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:21:15.770382   39054 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:21:15.770597   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:21:15.770790   39054 main.go:141] libmachine: (multinode-507871) Calling .GetState
	I0314 00:21:15.772509   39054 fix.go:112] recreateIfNeeded on multinode-507871: state=Running err=<nil>
	W0314 00:21:15.772525   39054 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:21:15.774755   39054 out.go:177] * Updating the running kvm2 "multinode-507871" VM ...
	I0314 00:21:15.776347   39054 machine.go:94] provisionDockerMachine start ...
	I0314 00:21:15.776371   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:21:15.776642   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:21:15.779523   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:15.779991   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:21:15.780018   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:15.780148   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:21:15.780344   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:15.780494   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:15.780692   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:21:15.780853   39054 main.go:141] libmachine: Using SSH client type: native
	I0314 00:21:15.781039   39054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0314 00:21:15.781050   39054 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:21:15.900769   39054 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-507871
	
	I0314 00:21:15.900805   39054 main.go:141] libmachine: (multinode-507871) Calling .GetMachineName
	I0314 00:21:15.901035   39054 buildroot.go:166] provisioning hostname "multinode-507871"
	I0314 00:21:15.901060   39054 main.go:141] libmachine: (multinode-507871) Calling .GetMachineName
	I0314 00:21:15.901297   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:21:15.904312   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:15.904745   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:21:15.904776   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:15.905121   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:21:15.905324   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:15.905488   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:15.905708   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:21:15.906009   39054 main.go:141] libmachine: Using SSH client type: native
	I0314 00:21:15.906192   39054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0314 00:21:15.906208   39054 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-507871 && echo "multinode-507871" | sudo tee /etc/hostname
	I0314 00:21:16.041344   39054 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-507871
	
	I0314 00:21:16.041383   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:21:16.044500   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.045011   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:21:16.045032   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.045225   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:21:16.045412   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:16.045647   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:16.045795   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:21:16.046004   39054 main.go:141] libmachine: Using SSH client type: native
	I0314 00:21:16.046237   39054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0314 00:21:16.046263   39054 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-507871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-507871/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-507871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:21:16.176217   39054 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:21:16.176249   39054 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:21:16.176271   39054 buildroot.go:174] setting up certificates
	I0314 00:21:16.176284   39054 provision.go:84] configureAuth start
	I0314 00:21:16.176364   39054 main.go:141] libmachine: (multinode-507871) Calling .GetMachineName
	I0314 00:21:16.176669   39054 main.go:141] libmachine: (multinode-507871) Calling .GetIP
	I0314 00:21:16.179919   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.180384   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:21:16.180425   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.180636   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:21:16.182725   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.183088   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:21:16.183111   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.183274   39054 provision.go:143] copyHostCerts
	I0314 00:21:16.183299   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:21:16.183334   39054 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:21:16.183343   39054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:21:16.183409   39054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:21:16.183494   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:21:16.183511   39054 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:21:16.183518   39054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:21:16.183541   39054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:21:16.183594   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:21:16.183614   39054 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:21:16.183621   39054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:21:16.183640   39054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:21:16.183703   39054 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.multinode-507871 san=[127.0.0.1 192.168.39.60 localhost minikube multinode-507871]
	I0314 00:21:16.376767   39054 provision.go:177] copyRemoteCerts
	I0314 00:21:16.376835   39054 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:21:16.376855   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:21:16.379603   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.380024   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:21:16.380054   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.380195   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:21:16.380350   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:16.380486   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:21:16.380612   39054 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/multinode-507871/id_rsa Username:docker}
	I0314 00:21:16.470272   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0314 00:21:16.470350   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:21:16.497147   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0314 00:21:16.497235   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0314 00:21:16.531695   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0314 00:21:16.531774   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:21:16.561281   39054 provision.go:87] duration metric: took 384.986211ms to configureAuth
	I0314 00:21:16.561308   39054 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:21:16.561511   39054 config.go:182] Loaded profile config "multinode-507871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:21:16.561583   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:21:16.564259   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.564736   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:21:16.564764   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.564967   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:21:16.565147   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:16.565275   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:16.565435   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:21:16.565569   39054 main.go:141] libmachine: Using SSH client type: native
	I0314 00:21:16.565766   39054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0314 00:21:16.565783   39054 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:22:47.424781   39054 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:22:47.424810   39054 machine.go:97] duration metric: took 1m31.64844843s to provisionDockerMachine
	I0314 00:22:47.424826   39054 start.go:293] postStartSetup for "multinode-507871" (driver="kvm2")
	I0314 00:22:47.424855   39054 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:22:47.424882   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:22:47.425221   39054 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:22:47.425264   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:22:47.428387   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.428781   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:22:47.428806   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.428953   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:22:47.429135   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:22:47.429324   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:22:47.429466   39054 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/multinode-507871/id_rsa Username:docker}
	I0314 00:22:47.519091   39054 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:22:47.524059   39054 command_runner.go:130] > NAME=Buildroot
	I0314 00:22:47.524091   39054 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0314 00:22:47.524134   39054 command_runner.go:130] > ID=buildroot
	I0314 00:22:47.524143   39054 command_runner.go:130] > VERSION_ID=2023.02.9
	I0314 00:22:47.524151   39054 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0314 00:22:47.524211   39054 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:22:47.524234   39054 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:22:47.524316   39054 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:22:47.524408   39054 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:22:47.524418   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /etc/ssl/certs/122682.pem
	I0314 00:22:47.524545   39054 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:22:47.535141   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:22:47.563284   39054 start.go:296] duration metric: took 138.442833ms for postStartSetup
	I0314 00:22:47.563354   39054 fix.go:56] duration metric: took 1m31.808587962s for fixHost
	I0314 00:22:47.563377   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:22:47.566331   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.566821   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:22:47.566846   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.567011   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:22:47.567224   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:22:47.567390   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:22:47.567558   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:22:47.567738   39054 main.go:141] libmachine: Using SSH client type: native
	I0314 00:22:47.567956   39054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0314 00:22:47.567970   39054 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:22:47.680055   39054 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710375767.648719361
	
	I0314 00:22:47.680077   39054 fix.go:216] guest clock: 1710375767.648719361
	I0314 00:22:47.680083   39054 fix.go:229] Guest: 2024-03-14 00:22:47.648719361 +0000 UTC Remote: 2024-03-14 00:22:47.563360019 +0000 UTC m=+91.948892899 (delta=85.359342ms)
	I0314 00:22:47.680128   39054 fix.go:200] guest clock delta is within tolerance: 85.359342ms
	I0314 00:22:47.680134   39054 start.go:83] releasing machines lock for "multinode-507871", held for 1m31.925406939s
	I0314 00:22:47.680158   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:22:47.680415   39054 main.go:141] libmachine: (multinode-507871) Calling .GetIP
	I0314 00:22:47.683326   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.683802   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:22:47.683834   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.684001   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:22:47.684581   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:22:47.684737   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:22:47.684815   39054 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:22:47.684876   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:22:47.684979   39054 ssh_runner.go:195] Run: cat /version.json
	I0314 00:22:47.684997   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:22:47.687728   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.688014   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.688188   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:22:47.688215   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.688351   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:22:47.688368   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:22:47.688374   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.688554   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:22:47.688570   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:22:47.688734   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:22:47.688743   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:22:47.688927   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:22:47.688969   39054 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/multinode-507871/id_rsa Username:docker}
	I0314 00:22:47.689048   39054 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/multinode-507871/id_rsa Username:docker}
	I0314 00:22:47.768061   39054 command_runner.go:130] > {"iso_version": "v1.32.1-1710348681-18375", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "fd5757a6603390a2c0efe3b1e5cdd797538203fd"}
	I0314 00:22:47.768240   39054 ssh_runner.go:195] Run: systemctl --version
	I0314 00:22:47.805263   39054 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0314 00:22:47.806054   39054 command_runner.go:130] > systemd 252 (252)
	I0314 00:22:47.806088   39054 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0314 00:22:47.806150   39054 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:22:47.973520   39054 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 00:22:47.980948   39054 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0314 00:22:47.981271   39054 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:22:47.981346   39054 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:22:47.990878   39054 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0314 00:22:47.990900   39054 start.go:494] detecting cgroup driver to use...
	I0314 00:22:47.990960   39054 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:22:48.007037   39054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:22:48.021313   39054 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:22:48.021374   39054 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:22:48.035201   39054 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:22:48.048907   39054 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:22:48.190809   39054 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:22:48.355710   39054 docker.go:233] disabling docker service ...
	I0314 00:22:48.355784   39054 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:22:48.377966   39054 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:22:48.393677   39054 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:22:48.542144   39054 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:22:48.688125   39054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:22:48.703800   39054 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:22:48.723432   39054 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0314 00:22:48.723820   39054 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:22:48.723872   39054 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:22:48.737298   39054 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:22:48.737376   39054 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:22:48.748681   39054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:22:48.759991   39054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:22:48.770814   39054 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:22:48.782152   39054 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:22:48.791622   39054 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0314 00:22:48.791884   39054 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:22:48.801770   39054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:22:48.948433   39054 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:22:49.216474   39054 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:22:49.216543   39054 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:22:49.221707   39054 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0314 00:22:49.221728   39054 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0314 00:22:49.221737   39054 command_runner.go:130] > Device: 0,22	Inode: 1326        Links: 1
	I0314 00:22:49.221749   39054 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 00:22:49.221757   39054 command_runner.go:130] > Access: 2024-03-14 00:22:49.061180765 +0000
	I0314 00:22:49.221766   39054 command_runner.go:130] > Modify: 2024-03-14 00:22:49.061180765 +0000
	I0314 00:22:49.221774   39054 command_runner.go:130] > Change: 2024-03-14 00:22:49.061180765 +0000
	I0314 00:22:49.221780   39054 command_runner.go:130] >  Birth: -
	I0314 00:22:49.221796   39054 start.go:562] Will wait 60s for crictl version
	I0314 00:22:49.221851   39054 ssh_runner.go:195] Run: which crictl
	I0314 00:22:49.225809   39054 command_runner.go:130] > /usr/bin/crictl
	I0314 00:22:49.225982   39054 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:22:49.267283   39054 command_runner.go:130] > Version:  0.1.0
	I0314 00:22:49.267309   39054 command_runner.go:130] > RuntimeName:  cri-o
	I0314 00:22:49.267316   39054 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0314 00:22:49.267329   39054 command_runner.go:130] > RuntimeApiVersion:  v1
	I0314 00:22:49.267349   39054 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:22:49.267432   39054 ssh_runner.go:195] Run: crio --version
	I0314 00:22:49.304291   39054 command_runner.go:130] > crio version 1.29.1
	I0314 00:22:49.304316   39054 command_runner.go:130] > Version:        1.29.1
	I0314 00:22:49.304322   39054 command_runner.go:130] > GitCommit:      unknown
	I0314 00:22:49.304326   39054 command_runner.go:130] > GitCommitDate:  unknown
	I0314 00:22:49.304330   39054 command_runner.go:130] > GitTreeState:   clean
	I0314 00:22:49.304343   39054 command_runner.go:130] > BuildDate:      2024-03-13T22:45:41Z
	I0314 00:22:49.304350   39054 command_runner.go:130] > GoVersion:      go1.21.6
	I0314 00:22:49.304357   39054 command_runner.go:130] > Compiler:       gc
	I0314 00:22:49.304364   39054 command_runner.go:130] > Platform:       linux/amd64
	I0314 00:22:49.304370   39054 command_runner.go:130] > Linkmode:       dynamic
	I0314 00:22:49.304380   39054 command_runner.go:130] > BuildTags:      
	I0314 00:22:49.304388   39054 command_runner.go:130] >   containers_image_ostree_stub
	I0314 00:22:49.304395   39054 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0314 00:22:49.304402   39054 command_runner.go:130] >   btrfs_noversion
	I0314 00:22:49.304409   39054 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0314 00:22:49.304432   39054 command_runner.go:130] >   libdm_no_deferred_remove
	I0314 00:22:49.304439   39054 command_runner.go:130] >   seccomp
	I0314 00:22:49.304446   39054 command_runner.go:130] > LDFlags:          unknown
	I0314 00:22:49.304453   39054 command_runner.go:130] > SeccompEnabled:   true
	I0314 00:22:49.304459   39054 command_runner.go:130] > AppArmorEnabled:  false
	I0314 00:22:49.304560   39054 ssh_runner.go:195] Run: crio --version
	I0314 00:22:49.335307   39054 command_runner.go:130] > crio version 1.29.1
	I0314 00:22:49.335334   39054 command_runner.go:130] > Version:        1.29.1
	I0314 00:22:49.335357   39054 command_runner.go:130] > GitCommit:      unknown
	I0314 00:22:49.335364   39054 command_runner.go:130] > GitCommitDate:  unknown
	I0314 00:22:49.335370   39054 command_runner.go:130] > GitTreeState:   clean
	I0314 00:22:49.335378   39054 command_runner.go:130] > BuildDate:      2024-03-13T22:45:41Z
	I0314 00:22:49.335390   39054 command_runner.go:130] > GoVersion:      go1.21.6
	I0314 00:22:49.335397   39054 command_runner.go:130] > Compiler:       gc
	I0314 00:22:49.335406   39054 command_runner.go:130] > Platform:       linux/amd64
	I0314 00:22:49.335413   39054 command_runner.go:130] > Linkmode:       dynamic
	I0314 00:22:49.335429   39054 command_runner.go:130] > BuildTags:      
	I0314 00:22:49.335438   39054 command_runner.go:130] >   containers_image_ostree_stub
	I0314 00:22:49.335447   39054 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0314 00:22:49.335454   39054 command_runner.go:130] >   btrfs_noversion
	I0314 00:22:49.335463   39054 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0314 00:22:49.335474   39054 command_runner.go:130] >   libdm_no_deferred_remove
	I0314 00:22:49.335479   39054 command_runner.go:130] >   seccomp
	I0314 00:22:49.335486   39054 command_runner.go:130] > LDFlags:          unknown
	I0314 00:22:49.335495   39054 command_runner.go:130] > SeccompEnabled:   true
	I0314 00:22:49.335500   39054 command_runner.go:130] > AppArmorEnabled:  false
	I0314 00:22:49.338472   39054 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 00:22:49.339951   39054 main.go:141] libmachine: (multinode-507871) Calling .GetIP
	I0314 00:22:49.342449   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:49.342823   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:22:49.342854   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:49.343095   39054 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 00:22:49.347675   39054 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0314 00:22:49.347888   39054 kubeadm.go:877] updating cluster {Name:multinode-507871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-507871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:22:49.348020   39054 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:22:49.348086   39054 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:22:49.394096   39054 command_runner.go:130] > {
	I0314 00:22:49.394121   39054 command_runner.go:130] >   "images": [
	I0314 00:22:49.394126   39054 command_runner.go:130] >     {
	I0314 00:22:49.394138   39054 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0314 00:22:49.394144   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.394153   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0314 00:22:49.394159   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394165   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.394176   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0314 00:22:49.394188   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0314 00:22:49.394195   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394202   39054 command_runner.go:130] >       "size": "65258016",
	I0314 00:22:49.394210   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.394219   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.394238   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.394249   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.394256   39054 command_runner.go:130] >     },
	I0314 00:22:49.394262   39054 command_runner.go:130] >     {
	I0314 00:22:49.394273   39054 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0314 00:22:49.394283   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.394293   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0314 00:22:49.394315   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394322   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.394335   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0314 00:22:49.394349   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0314 00:22:49.394359   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394366   39054 command_runner.go:130] >       "size": "65291810",
	I0314 00:22:49.394375   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.394391   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.394402   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.394412   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.394419   39054 command_runner.go:130] >     },
	I0314 00:22:49.394428   39054 command_runner.go:130] >     {
	I0314 00:22:49.394440   39054 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0314 00:22:49.394451   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.394464   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0314 00:22:49.394470   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394478   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.394494   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0314 00:22:49.394508   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0314 00:22:49.394517   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394525   39054 command_runner.go:130] >       "size": "1363676",
	I0314 00:22:49.394534   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.394540   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.394545   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.394551   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.394557   39054 command_runner.go:130] >     },
	I0314 00:22:49.394562   39054 command_runner.go:130] >     {
	I0314 00:22:49.394575   39054 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0314 00:22:49.394585   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.394595   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0314 00:22:49.394604   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394611   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.394625   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0314 00:22:49.394649   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0314 00:22:49.394659   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394667   39054 command_runner.go:130] >       "size": "31470524",
	I0314 00:22:49.394683   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.394693   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.394700   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.394710   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.394717   39054 command_runner.go:130] >     },
	I0314 00:22:49.394725   39054 command_runner.go:130] >     {
	I0314 00:22:49.394736   39054 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0314 00:22:49.394753   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.394780   39054 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0314 00:22:49.394787   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394794   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.394815   39054 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0314 00:22:49.394831   39054 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0314 00:22:49.394841   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394851   39054 command_runner.go:130] >       "size": "53621675",
	I0314 00:22:49.394861   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.394868   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.394875   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.394885   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.394891   39054 command_runner.go:130] >     },
	I0314 00:22:49.394898   39054 command_runner.go:130] >     {
	I0314 00:22:49.394911   39054 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0314 00:22:49.394922   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.394935   39054 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0314 00:22:49.394945   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394953   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.394968   39054 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0314 00:22:49.394983   39054 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0314 00:22:49.394991   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394999   39054 command_runner.go:130] >       "size": "295456551",
	I0314 00:22:49.395008   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.395017   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.395026   39054 command_runner.go:130] >       },
	I0314 00:22:49.395038   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.395047   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.395054   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.395070   39054 command_runner.go:130] >     },
	I0314 00:22:49.395079   39054 command_runner.go:130] >     {
	I0314 00:22:49.395089   39054 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0314 00:22:49.395099   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.395108   39054 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0314 00:22:49.395116   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395123   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.395139   39054 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0314 00:22:49.395154   39054 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0314 00:22:49.395163   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395170   39054 command_runner.go:130] >       "size": "127226832",
	I0314 00:22:49.395180   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.395187   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.395196   39054 command_runner.go:130] >       },
	I0314 00:22:49.395203   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.395211   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.395218   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.395227   39054 command_runner.go:130] >     },
	I0314 00:22:49.395233   39054 command_runner.go:130] >     {
	I0314 00:22:49.395244   39054 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0314 00:22:49.395254   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.395263   39054 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0314 00:22:49.395272   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395279   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.395312   39054 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0314 00:22:49.395332   39054 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0314 00:22:49.395338   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395344   39054 command_runner.go:130] >       "size": "123261750",
	I0314 00:22:49.395353   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.395360   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.395369   39054 command_runner.go:130] >       },
	I0314 00:22:49.395376   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.395386   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.395394   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.395402   39054 command_runner.go:130] >     },
	I0314 00:22:49.395408   39054 command_runner.go:130] >     {
	I0314 00:22:49.395426   39054 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0314 00:22:49.395437   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.395449   39054 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0314 00:22:49.395456   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395466   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.395475   39054 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0314 00:22:49.395485   39054 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0314 00:22:49.395490   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395497   39054 command_runner.go:130] >       "size": "74749335",
	I0314 00:22:49.395503   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.395509   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.395515   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.395522   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.395527   39054 command_runner.go:130] >     },
	I0314 00:22:49.395533   39054 command_runner.go:130] >     {
	I0314 00:22:49.395543   39054 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0314 00:22:49.395549   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.395558   39054 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0314 00:22:49.395564   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395571   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.395582   39054 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0314 00:22:49.395594   39054 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0314 00:22:49.395603   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395612   39054 command_runner.go:130] >       "size": "61551410",
	I0314 00:22:49.395620   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.395627   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.395636   39054 command_runner.go:130] >       },
	I0314 00:22:49.395644   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.395653   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.395661   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.395669   39054 command_runner.go:130] >     },
	I0314 00:22:49.395675   39054 command_runner.go:130] >     {
	I0314 00:22:49.395689   39054 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0314 00:22:49.395698   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.395705   39054 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0314 00:22:49.395711   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395728   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.395744   39054 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0314 00:22:49.395759   39054 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0314 00:22:49.395768   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395775   39054 command_runner.go:130] >       "size": "750414",
	I0314 00:22:49.395784   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.395791   39054 command_runner.go:130] >         "value": "65535"
	I0314 00:22:49.395799   39054 command_runner.go:130] >       },
	I0314 00:22:49.395807   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.395816   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.395825   39054 command_runner.go:130] >       "pinned": true
	I0314 00:22:49.395833   39054 command_runner.go:130] >     }
	I0314 00:22:49.395840   39054 command_runner.go:130] >   ]
	I0314 00:22:49.395846   39054 command_runner.go:130] > }
	I0314 00:22:49.396041   39054 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:22:49.396054   39054 crio.go:415] Images already preloaded, skipping extraction
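	(Editor's note: the `sudo crictl images --output json` calls above are how the preload check works — the JSON returned by CRI-O is compared against the set of images the cluster needs, and extraction of the preload tarball is skipped when every required tag is already present. Below is a minimal, illustrative Go sketch of that kind of check; it is not minikube's actual implementation, and the required-image list and the direct local `crictl` invocation are assumptions for illustration only.)

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// criImage mirrors the fields of interest in `crictl images --output json`.
	type criImage struct {
		RepoTags []string `json:"repoTags"`
	}

	type criImageList struct {
		Images []criImage `json:"images"`
	}

	// preloaded reports whether every tag in want is already present in CRI-O's image store.
	func preloaded(want []string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var list criImageList
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, w := range want {
			if !have[w] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		// Hypothetical subset of required tags, taken from the log output above.
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.28.4",
			"registry.k8s.io/etcd:3.5.9-0",
		}
		ok, err := preloaded(want)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("all images preloaded:", ok)
	}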
	I0314 00:22:49.396112   39054 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:22:49.432649   39054 command_runner.go:130] > {
	I0314 00:22:49.432669   39054 command_runner.go:130] >   "images": [
	I0314 00:22:49.432674   39054 command_runner.go:130] >     {
	I0314 00:22:49.432681   39054 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0314 00:22:49.432687   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.432692   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0314 00:22:49.432696   39054 command_runner.go:130] >       ],
	I0314 00:22:49.432699   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.432708   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0314 00:22:49.432715   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0314 00:22:49.432719   39054 command_runner.go:130] >       ],
	I0314 00:22:49.432723   39054 command_runner.go:130] >       "size": "65258016",
	I0314 00:22:49.432737   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.432743   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.432753   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.432763   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.432768   39054 command_runner.go:130] >     },
	I0314 00:22:49.432773   39054 command_runner.go:130] >     {
	I0314 00:22:49.432782   39054 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0314 00:22:49.432788   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.432798   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0314 00:22:49.432804   39054 command_runner.go:130] >       ],
	I0314 00:22:49.432810   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.432824   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0314 00:22:49.432837   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0314 00:22:49.432846   39054 command_runner.go:130] >       ],
	I0314 00:22:49.432856   39054 command_runner.go:130] >       "size": "65291810",
	I0314 00:22:49.432862   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.432880   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.432886   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.432890   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.432894   39054 command_runner.go:130] >     },
	I0314 00:22:49.432897   39054 command_runner.go:130] >     {
	I0314 00:22:49.432903   39054 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0314 00:22:49.432907   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.432912   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0314 00:22:49.432916   39054 command_runner.go:130] >       ],
	I0314 00:22:49.432925   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.432935   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0314 00:22:49.432942   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0314 00:22:49.432946   39054 command_runner.go:130] >       ],
	I0314 00:22:49.432950   39054 command_runner.go:130] >       "size": "1363676",
	I0314 00:22:49.432954   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.432958   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.432964   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.432969   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.432972   39054 command_runner.go:130] >     },
	I0314 00:22:49.432975   39054 command_runner.go:130] >     {
	I0314 00:22:49.432985   39054 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0314 00:22:49.432992   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.432997   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0314 00:22:49.433000   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433004   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433012   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0314 00:22:49.433030   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0314 00:22:49.433041   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433045   39054 command_runner.go:130] >       "size": "31470524",
	I0314 00:22:49.433052   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.433057   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433060   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433064   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.433067   39054 command_runner.go:130] >     },
	I0314 00:22:49.433070   39054 command_runner.go:130] >     {
	I0314 00:22:49.433087   39054 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0314 00:22:49.433094   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.433098   39054 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0314 00:22:49.433102   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433106   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433113   39054 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0314 00:22:49.433123   39054 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0314 00:22:49.433127   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433130   39054 command_runner.go:130] >       "size": "53621675",
	I0314 00:22:49.433134   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.433138   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433142   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433147   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.433150   39054 command_runner.go:130] >     },
	I0314 00:22:49.433153   39054 command_runner.go:130] >     {
	I0314 00:22:49.433159   39054 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0314 00:22:49.433162   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.433167   39054 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0314 00:22:49.433170   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433174   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433182   39054 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0314 00:22:49.433194   39054 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0314 00:22:49.433205   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433209   39054 command_runner.go:130] >       "size": "295456551",
	I0314 00:22:49.433212   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.433215   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.433220   39054 command_runner.go:130] >       },
	I0314 00:22:49.433224   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433227   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433231   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.433234   39054 command_runner.go:130] >     },
	I0314 00:22:49.433237   39054 command_runner.go:130] >     {
	I0314 00:22:49.433243   39054 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0314 00:22:49.433247   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.433251   39054 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0314 00:22:49.433255   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433259   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433267   39054 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0314 00:22:49.433277   39054 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0314 00:22:49.433280   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433284   39054 command_runner.go:130] >       "size": "127226832",
	I0314 00:22:49.433290   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.433294   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.433297   39054 command_runner.go:130] >       },
	I0314 00:22:49.433303   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433307   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433313   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.433316   39054 command_runner.go:130] >     },
	I0314 00:22:49.433320   39054 command_runner.go:130] >     {
	I0314 00:22:49.433326   39054 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0314 00:22:49.433332   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.433338   39054 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0314 00:22:49.433346   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433353   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433392   39054 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0314 00:22:49.433407   39054 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0314 00:22:49.433413   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433426   39054 command_runner.go:130] >       "size": "123261750",
	I0314 00:22:49.433434   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.433439   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.433445   39054 command_runner.go:130] >       },
	I0314 00:22:49.433448   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433452   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433456   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.433459   39054 command_runner.go:130] >     },
	I0314 00:22:49.433462   39054 command_runner.go:130] >     {
	I0314 00:22:49.433471   39054 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0314 00:22:49.433476   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.433481   39054 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0314 00:22:49.433485   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433506   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433514   39054 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0314 00:22:49.433523   39054 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0314 00:22:49.433529   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433535   39054 command_runner.go:130] >       "size": "74749335",
	I0314 00:22:49.433539   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.433543   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433546   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433550   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.433553   39054 command_runner.go:130] >     },
	I0314 00:22:49.433557   39054 command_runner.go:130] >     {
	I0314 00:22:49.433562   39054 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0314 00:22:49.433567   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.433571   39054 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0314 00:22:49.433577   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433581   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433588   39054 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0314 00:22:49.433597   39054 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0314 00:22:49.433601   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433605   39054 command_runner.go:130] >       "size": "61551410",
	I0314 00:22:49.433609   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.433613   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.433618   39054 command_runner.go:130] >       },
	I0314 00:22:49.433627   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433633   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433636   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.433640   39054 command_runner.go:130] >     },
	I0314 00:22:49.433643   39054 command_runner.go:130] >     {
	I0314 00:22:49.433649   39054 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0314 00:22:49.433653   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.433657   39054 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0314 00:22:49.433661   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433665   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433677   39054 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0314 00:22:49.433686   39054 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0314 00:22:49.433689   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433693   39054 command_runner.go:130] >       "size": "750414",
	I0314 00:22:49.433697   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.433701   39054 command_runner.go:130] >         "value": "65535"
	I0314 00:22:49.433706   39054 command_runner.go:130] >       },
	I0314 00:22:49.433710   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433716   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433720   39054 command_runner.go:130] >       "pinned": true
	I0314 00:22:49.433726   39054 command_runner.go:130] >     }
	I0314 00:22:49.433729   39054 command_runner.go:130] >   ]
	I0314 00:22:49.433732   39054 command_runner.go:130] > }
	I0314 00:22:49.433848   39054 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:22:49.433859   39054 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:22:49.433866   39054 kubeadm.go:928] updating node { 192.168.39.60 8443 v1.28.4 crio true true} ...
	I0314 00:22:49.433957   39054 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-507871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-507871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
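	(Editor's note: the block above is the kubelet systemd drop-in minikube renders for this node; ExecStart is cleared and re-set with per-node flags derived from the node name, IP, and Kubernetes version. The Go sketch below shows one way such a drop-in could be templated; it is a rough illustration based only on the flags visible in the log, not minikube's actual generator, and the struct and template literal are assumptions.)

	package main

	import (
		"os"
		"text/template"
	)

	// nodeConfig holds just the values that appear in the kubelet flags above.
	type nodeConfig struct {
		KubernetesVersion string
		Hostname          string
		NodeIP            string
	}

	// kubeletUnit mirrors the drop-in printed in the log.
	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		cfg := nodeConfig{
			KubernetesVersion: "v1.28.4",
			Hostname:          "multinode-507871",
			NodeIP:            "192.168.39.60",
		}
		// Write the rendered drop-in to stdout; a real deployment would place it
		// under the kubelet service's drop-in directory and reload systemd.
		if err := tmpl.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}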
	I0314 00:22:49.434020   39054 ssh_runner.go:195] Run: crio config
	I0314 00:22:49.469042   39054 command_runner.go:130] ! time="2024-03-14 00:22:49.437811878Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0314 00:22:49.480271   39054 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0314 00:22:49.487841   39054 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0314 00:22:49.487865   39054 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0314 00:22:49.487872   39054 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0314 00:22:49.487875   39054 command_runner.go:130] > #
	I0314 00:22:49.487881   39054 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0314 00:22:49.487887   39054 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0314 00:22:49.487892   39054 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0314 00:22:49.487905   39054 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0314 00:22:49.487911   39054 command_runner.go:130] > # reload'.
	I0314 00:22:49.487919   39054 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0314 00:22:49.487932   39054 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0314 00:22:49.487941   39054 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0314 00:22:49.487951   39054 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0314 00:22:49.487957   39054 command_runner.go:130] > [crio]
	I0314 00:22:49.487967   39054 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0314 00:22:49.487975   39054 command_runner.go:130] > # containers images, in this directory.
	I0314 00:22:49.487983   39054 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0314 00:22:49.487993   39054 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0314 00:22:49.488012   39054 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0314 00:22:49.488020   39054 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0314 00:22:49.488027   39054 command_runner.go:130] > # imagestore = ""
	I0314 00:22:49.488033   39054 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0314 00:22:49.488039   39054 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0314 00:22:49.488045   39054 command_runner.go:130] > storage_driver = "overlay"
	I0314 00:22:49.488054   39054 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0314 00:22:49.488065   39054 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0314 00:22:49.488076   39054 command_runner.go:130] > storage_option = [
	I0314 00:22:49.488086   39054 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0314 00:22:49.488089   39054 command_runner.go:130] > ]
	I0314 00:22:49.488098   39054 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0314 00:22:49.488104   39054 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0314 00:22:49.488111   39054 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0314 00:22:49.488116   39054 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0314 00:22:49.488123   39054 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0314 00:22:49.488128   39054 command_runner.go:130] > # always happen on a node reboot
	I0314 00:22:49.488136   39054 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0314 00:22:49.488155   39054 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0314 00:22:49.488176   39054 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0314 00:22:49.488184   39054 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0314 00:22:49.488192   39054 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0314 00:22:49.488202   39054 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0314 00:22:49.488212   39054 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0314 00:22:49.488218   39054 command_runner.go:130] > # internal_wipe = true
	I0314 00:22:49.488231   39054 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0314 00:22:49.488243   39054 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0314 00:22:49.488252   39054 command_runner.go:130] > # internal_repair = false
	I0314 00:22:49.488264   39054 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0314 00:22:49.488276   39054 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0314 00:22:49.488288   39054 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0314 00:22:49.488297   39054 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0314 00:22:49.488303   39054 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0314 00:22:49.488309   39054 command_runner.go:130] > [crio.api]
	I0314 00:22:49.488315   39054 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0314 00:22:49.488324   39054 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0314 00:22:49.488337   39054 command_runner.go:130] > # IP address on which the stream server will listen.
	I0314 00:22:49.488349   39054 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0314 00:22:49.488363   39054 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0314 00:22:49.488374   39054 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0314 00:22:49.488383   39054 command_runner.go:130] > # stream_port = "0"
	I0314 00:22:49.488394   39054 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0314 00:22:49.488401   39054 command_runner.go:130] > # stream_enable_tls = false
	I0314 00:22:49.488410   39054 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0314 00:22:49.488420   39054 command_runner.go:130] > # stream_idle_timeout = ""
	I0314 00:22:49.488433   39054 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0314 00:22:49.488449   39054 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0314 00:22:49.488457   39054 command_runner.go:130] > # minutes.
	I0314 00:22:49.488466   39054 command_runner.go:130] > # stream_tls_cert = ""
	I0314 00:22:49.488477   39054 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0314 00:22:49.488486   39054 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0314 00:22:49.488495   39054 command_runner.go:130] > # stream_tls_key = ""
	I0314 00:22:49.488509   39054 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0314 00:22:49.488521   39054 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0314 00:22:49.488551   39054 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0314 00:22:49.488561   39054 command_runner.go:130] > # stream_tls_ca = ""
	I0314 00:22:49.488570   39054 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0314 00:22:49.488578   39054 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0314 00:22:49.488596   39054 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0314 00:22:49.488607   39054 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0314 00:22:49.488620   39054 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0314 00:22:49.488637   39054 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0314 00:22:49.488651   39054 command_runner.go:130] > [crio.runtime]
	I0314 00:22:49.488660   39054 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0314 00:22:49.488671   39054 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0314 00:22:49.488680   39054 command_runner.go:130] > # "nofile=1024:2048"
	I0314 00:22:49.488693   39054 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0314 00:22:49.488703   39054 command_runner.go:130] > # default_ulimits = [
	I0314 00:22:49.488711   39054 command_runner.go:130] > # ]
	I0314 00:22:49.488724   39054 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0314 00:22:49.488733   39054 command_runner.go:130] > # no_pivot = false
	I0314 00:22:49.488741   39054 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0314 00:22:49.488750   39054 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0314 00:22:49.488761   39054 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0314 00:22:49.488774   39054 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0314 00:22:49.488785   39054 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0314 00:22:49.488797   39054 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0314 00:22:49.488807   39054 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0314 00:22:49.488816   39054 command_runner.go:130] > # Cgroup setting for conmon
	I0314 00:22:49.488828   39054 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0314 00:22:49.488835   39054 command_runner.go:130] > conmon_cgroup = "pod"
	I0314 00:22:49.488843   39054 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0314 00:22:49.488855   39054 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0314 00:22:49.488874   39054 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0314 00:22:49.488883   39054 command_runner.go:130] > conmon_env = [
	I0314 00:22:49.488894   39054 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0314 00:22:49.488902   39054 command_runner.go:130] > ]
	I0314 00:22:49.488913   39054 command_runner.go:130] > # Additional environment variables to set for all the
	I0314 00:22:49.488921   39054 command_runner.go:130] > # containers. These are overridden if set in the
	I0314 00:22:49.488929   39054 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0314 00:22:49.488939   39054 command_runner.go:130] > # default_env = [
	I0314 00:22:49.488948   39054 command_runner.go:130] > # ]
	I0314 00:22:49.488960   39054 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0314 00:22:49.488975   39054 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0314 00:22:49.488983   39054 command_runner.go:130] > # selinux = false
	I0314 00:22:49.488993   39054 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0314 00:22:49.489003   39054 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0314 00:22:49.489019   39054 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0314 00:22:49.489030   39054 command_runner.go:130] > # seccomp_profile = ""
	I0314 00:22:49.489039   39054 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0314 00:22:49.489050   39054 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0314 00:22:49.489062   39054 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0314 00:22:49.489073   39054 command_runner.go:130] > # which might increase security.
	I0314 00:22:49.489080   39054 command_runner.go:130] > # This option is currently deprecated,
	I0314 00:22:49.489090   39054 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0314 00:22:49.489098   39054 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0314 00:22:49.489112   39054 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0314 00:22:49.489125   39054 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0314 00:22:49.489138   39054 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0314 00:22:49.489150   39054 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0314 00:22:49.489162   39054 command_runner.go:130] > # This option supports live configuration reload.
	I0314 00:22:49.489172   39054 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0314 00:22:49.489180   39054 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0314 00:22:49.489189   39054 command_runner.go:130] > # the cgroup blockio controller.
	I0314 00:22:49.489199   39054 command_runner.go:130] > # blockio_config_file = ""
	I0314 00:22:49.489213   39054 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0314 00:22:49.489222   39054 command_runner.go:130] > # blockio parameters.
	I0314 00:22:49.489231   39054 command_runner.go:130] > # blockio_reload = false
	I0314 00:22:49.489245   39054 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0314 00:22:49.489254   39054 command_runner.go:130] > # irqbalance daemon.
	I0314 00:22:49.489266   39054 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0314 00:22:49.489280   39054 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0314 00:22:49.489294   39054 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0314 00:22:49.489307   39054 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0314 00:22:49.489320   39054 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0314 00:22:49.489333   39054 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0314 00:22:49.489344   39054 command_runner.go:130] > # This option supports live configuration reload.
	I0314 00:22:49.489351   39054 command_runner.go:130] > # rdt_config_file = ""
	I0314 00:22:49.489358   39054 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0314 00:22:49.489367   39054 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0314 00:22:49.489407   39054 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0314 00:22:49.489418   39054 command_runner.go:130] > # separate_pull_cgroup = ""
	I0314 00:22:49.489428   39054 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0314 00:22:49.489443   39054 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0314 00:22:49.489451   39054 command_runner.go:130] > # will be added.
	I0314 00:22:49.489462   39054 command_runner.go:130] > # default_capabilities = [
	I0314 00:22:49.489471   39054 command_runner.go:130] > # 	"CHOWN",
	I0314 00:22:49.489480   39054 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0314 00:22:49.489489   39054 command_runner.go:130] > # 	"FSETID",
	I0314 00:22:49.489498   39054 command_runner.go:130] > # 	"FOWNER",
	I0314 00:22:49.489506   39054 command_runner.go:130] > # 	"SETGID",
	I0314 00:22:49.489515   39054 command_runner.go:130] > # 	"SETUID",
	I0314 00:22:49.489522   39054 command_runner.go:130] > # 	"SETPCAP",
	I0314 00:22:49.489526   39054 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0314 00:22:49.489534   39054 command_runner.go:130] > # 	"KILL",
	I0314 00:22:49.489540   39054 command_runner.go:130] > # ]
	I0314 00:22:49.489555   39054 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0314 00:22:49.489570   39054 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0314 00:22:49.489581   39054 command_runner.go:130] > # add_inheritable_capabilities = false
	I0314 00:22:49.489593   39054 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0314 00:22:49.489604   39054 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0314 00:22:49.489611   39054 command_runner.go:130] > # default_sysctls = [
	I0314 00:22:49.489615   39054 command_runner.go:130] > # ]
	I0314 00:22:49.489622   39054 command_runner.go:130] > # List of devices on the host that a
	I0314 00:22:49.489639   39054 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0314 00:22:49.489657   39054 command_runner.go:130] > # allowed_devices = [
	I0314 00:22:49.489663   39054 command_runner.go:130] > # 	"/dev/fuse",
	I0314 00:22:49.489668   39054 command_runner.go:130] > # ]
	I0314 00:22:49.489676   39054 command_runner.go:130] > # List of additional devices. specified as
	I0314 00:22:49.489687   39054 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0314 00:22:49.489696   39054 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0314 00:22:49.489703   39054 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0314 00:22:49.489716   39054 command_runner.go:130] > # additional_devices = [
	I0314 00:22:49.489725   39054 command_runner.go:130] > # ]
	I0314 00:22:49.489736   39054 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0314 00:22:49.489751   39054 command_runner.go:130] > # cdi_spec_dirs = [
	I0314 00:22:49.489760   39054 command_runner.go:130] > # 	"/etc/cdi",
	I0314 00:22:49.489766   39054 command_runner.go:130] > # 	"/var/run/cdi",
	I0314 00:22:49.489775   39054 command_runner.go:130] > # ]
	I0314 00:22:49.489788   39054 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0314 00:22:49.489801   39054 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0314 00:22:49.489811   39054 command_runner.go:130] > # Defaults to false.
	I0314 00:22:49.489823   39054 command_runner.go:130] > # device_ownership_from_security_context = false
	I0314 00:22:49.489836   39054 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0314 00:22:49.489850   39054 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0314 00:22:49.489859   39054 command_runner.go:130] > # hooks_dir = [
	I0314 00:22:49.489868   39054 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0314 00:22:49.489873   39054 command_runner.go:130] > # ]
	I0314 00:22:49.489882   39054 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0314 00:22:49.489895   39054 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0314 00:22:49.489907   39054 command_runner.go:130] > # its default mounts from the following two files:
	I0314 00:22:49.489915   39054 command_runner.go:130] > #
	I0314 00:22:49.489926   39054 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0314 00:22:49.489939   39054 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0314 00:22:49.489951   39054 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0314 00:22:49.489957   39054 command_runner.go:130] > #
	I0314 00:22:49.489964   39054 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0314 00:22:49.489976   39054 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0314 00:22:49.489994   39054 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0314 00:22:49.490005   39054 command_runner.go:130] > #      only add mounts it finds in this file.
	I0314 00:22:49.490014   39054 command_runner.go:130] > #
	I0314 00:22:49.490021   39054 command_runner.go:130] > # default_mounts_file = ""
	I0314 00:22:49.490029   39054 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0314 00:22:49.490040   39054 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0314 00:22:49.490046   39054 command_runner.go:130] > pids_limit = 1024
	I0314 00:22:49.490056   39054 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0314 00:22:49.490069   39054 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0314 00:22:49.490082   39054 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0314 00:22:49.490097   39054 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0314 00:22:49.490107   39054 command_runner.go:130] > # log_size_max = -1
	I0314 00:22:49.490120   39054 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0314 00:22:49.490130   39054 command_runner.go:130] > # log_to_journald = false
	I0314 00:22:49.490142   39054 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0314 00:22:49.490153   39054 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0314 00:22:49.490164   39054 command_runner.go:130] > # Path to directory for container attach sockets.
	I0314 00:22:49.490181   39054 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0314 00:22:49.490192   39054 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0314 00:22:49.490202   39054 command_runner.go:130] > # bind_mount_prefix = ""
	I0314 00:22:49.490214   39054 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0314 00:22:49.490221   39054 command_runner.go:130] > # read_only = false
	I0314 00:22:49.490228   39054 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0314 00:22:49.490241   39054 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0314 00:22:49.490251   39054 command_runner.go:130] > # live configuration reload.
	I0314 00:22:49.490258   39054 command_runner.go:130] > # log_level = "info"
	I0314 00:22:49.490270   39054 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0314 00:22:49.490281   39054 command_runner.go:130] > # This option supports live configuration reload.
	I0314 00:22:49.490289   39054 command_runner.go:130] > # log_filter = ""
	I0314 00:22:49.490301   39054 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0314 00:22:49.490312   39054 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0314 00:22:49.490321   39054 command_runner.go:130] > # separated by comma.
	I0314 00:22:49.490336   39054 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 00:22:49.490346   39054 command_runner.go:130] > # uid_mappings = ""
	I0314 00:22:49.490356   39054 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0314 00:22:49.490368   39054 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0314 00:22:49.490378   39054 command_runner.go:130] > # separated by comma.
	I0314 00:22:49.490391   39054 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 00:22:49.490398   39054 command_runner.go:130] > # gid_mappings = ""
	I0314 00:22:49.490408   39054 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0314 00:22:49.490422   39054 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0314 00:22:49.490434   39054 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0314 00:22:49.490452   39054 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 00:22:49.490463   39054 command_runner.go:130] > # minimum_mappable_uid = -1
	I0314 00:22:49.490475   39054 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0314 00:22:49.490483   39054 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0314 00:22:49.490495   39054 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0314 00:22:49.490511   39054 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 00:22:49.490521   39054 command_runner.go:130] > # minimum_mappable_gid = -1
	I0314 00:22:49.490533   39054 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0314 00:22:49.490548   39054 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0314 00:22:49.490559   39054 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0314 00:22:49.490567   39054 command_runner.go:130] > # ctr_stop_timeout = 30
	I0314 00:22:49.490579   39054 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0314 00:22:49.490592   39054 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0314 00:22:49.490603   39054 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0314 00:22:49.490615   39054 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0314 00:22:49.490621   39054 command_runner.go:130] > drop_infra_ctr = false
	I0314 00:22:49.490634   39054 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0314 00:22:49.490650   39054 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0314 00:22:49.490659   39054 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0314 00:22:49.490668   39054 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0314 00:22:49.490683   39054 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0314 00:22:49.490696   39054 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0314 00:22:49.490708   39054 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0314 00:22:49.490719   39054 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0314 00:22:49.490727   39054 command_runner.go:130] > # shared_cpuset = ""
	I0314 00:22:49.490738   39054 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0314 00:22:49.490745   39054 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0314 00:22:49.490751   39054 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0314 00:22:49.490779   39054 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0314 00:22:49.490789   39054 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0314 00:22:49.490798   39054 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0314 00:22:49.490808   39054 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0314 00:22:49.490818   39054 command_runner.go:130] > # enable_criu_support = false
	I0314 00:22:49.490826   39054 command_runner.go:130] > # Enable/disable the generation of the container,
	I0314 00:22:49.490836   39054 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0314 00:22:49.490843   39054 command_runner.go:130] > # enable_pod_events = false
	I0314 00:22:49.490853   39054 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0314 00:22:49.490878   39054 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0314 00:22:49.490888   39054 command_runner.go:130] > # default_runtime = "runc"
	I0314 00:22:49.490896   39054 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0314 00:22:49.490909   39054 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0314 00:22:49.490924   39054 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0314 00:22:49.490939   39054 command_runner.go:130] > # creation as a file is not desired either.
	I0314 00:22:49.490955   39054 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0314 00:22:49.490966   39054 command_runner.go:130] > # the hostname is being managed dynamically.
	I0314 00:22:49.490973   39054 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0314 00:22:49.490983   39054 command_runner.go:130] > # ]
	I0314 00:22:49.490999   39054 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0314 00:22:49.491007   39054 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0314 00:22:49.491014   39054 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0314 00:22:49.491021   39054 command_runner.go:130] > # Each entry in the table should follow the format:
	I0314 00:22:49.491025   39054 command_runner.go:130] > #
	I0314 00:22:49.491033   39054 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0314 00:22:49.491041   39054 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0314 00:22:49.491049   39054 command_runner.go:130] > # runtime_type = "oci"
	I0314 00:22:49.491127   39054 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0314 00:22:49.491142   39054 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0314 00:22:49.491149   39054 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0314 00:22:49.491160   39054 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0314 00:22:49.491169   39054 command_runner.go:130] > # monitor_env = []
	I0314 00:22:49.491180   39054 command_runner.go:130] > # privileged_without_host_devices = false
	I0314 00:22:49.491189   39054 command_runner.go:130] > # allowed_annotations = []
	I0314 00:22:49.491198   39054 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0314 00:22:49.491207   39054 command_runner.go:130] > # Where:
	I0314 00:22:49.491219   39054 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0314 00:22:49.491229   39054 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0314 00:22:49.491241   39054 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0314 00:22:49.491252   39054 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0314 00:22:49.491261   39054 command_runner.go:130] > #   in $PATH.
	I0314 00:22:49.491273   39054 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0314 00:22:49.491281   39054 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0314 00:22:49.491290   39054 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0314 00:22:49.491300   39054 command_runner.go:130] > #   state.
	I0314 00:22:49.491313   39054 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0314 00:22:49.491325   39054 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0314 00:22:49.491337   39054 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0314 00:22:49.491348   39054 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0314 00:22:49.491359   39054 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0314 00:22:49.491369   39054 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0314 00:22:49.491395   39054 command_runner.go:130] > #   The currently recognized values are:
	I0314 00:22:49.491411   39054 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0314 00:22:49.491425   39054 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0314 00:22:49.491443   39054 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0314 00:22:49.491452   39054 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0314 00:22:49.491466   39054 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0314 00:22:49.491481   39054 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0314 00:22:49.491491   39054 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0314 00:22:49.491505   39054 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0314 00:22:49.491514   39054 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0314 00:22:49.491523   39054 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0314 00:22:49.491531   39054 command_runner.go:130] > #   deprecated option "conmon".
	I0314 00:22:49.491540   39054 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0314 00:22:49.491551   39054 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0314 00:22:49.491566   39054 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0314 00:22:49.491577   39054 command_runner.go:130] > #   should be moved to the container's cgroup
	I0314 00:22:49.491590   39054 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0314 00:22:49.491601   39054 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0314 00:22:49.491614   39054 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0314 00:22:49.491622   39054 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0314 00:22:49.491629   39054 command_runner.go:130] > #
	I0314 00:22:49.491637   39054 command_runner.go:130] > # Using the seccomp notifier feature:
	I0314 00:22:49.491649   39054 command_runner.go:130] > #
	I0314 00:22:49.491666   39054 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0314 00:22:49.491679   39054 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0314 00:22:49.491687   39054 command_runner.go:130] > #
	I0314 00:22:49.491699   39054 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0314 00:22:49.491708   39054 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0314 00:22:49.491714   39054 command_runner.go:130] > #
	I0314 00:22:49.491724   39054 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0314 00:22:49.491733   39054 command_runner.go:130] > # feature.
	I0314 00:22:49.491741   39054 command_runner.go:130] > #
	I0314 00:22:49.491750   39054 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0314 00:22:49.491762   39054 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0314 00:22:49.491775   39054 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0314 00:22:49.491787   39054 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0314 00:22:49.491799   39054 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0314 00:22:49.491807   39054 command_runner.go:130] > #
	I0314 00:22:49.491817   39054 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0314 00:22:49.491836   39054 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0314 00:22:49.491845   39054 command_runner.go:130] > #
	I0314 00:22:49.491854   39054 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0314 00:22:49.491876   39054 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0314 00:22:49.491883   39054 command_runner.go:130] > #
	I0314 00:22:49.491890   39054 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0314 00:22:49.491903   39054 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0314 00:22:49.491913   39054 command_runner.go:130] > # limitation.
	I0314 00:22:49.491923   39054 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0314 00:22:49.491935   39054 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0314 00:22:49.491940   39054 command_runner.go:130] > runtime_type = "oci"
	I0314 00:22:49.491950   39054 command_runner.go:130] > runtime_root = "/run/runc"
	I0314 00:22:49.491959   39054 command_runner.go:130] > runtime_config_path = ""
	I0314 00:22:49.491968   39054 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0314 00:22:49.491972   39054 command_runner.go:130] > monitor_cgroup = "pod"
	I0314 00:22:49.491981   39054 command_runner.go:130] > monitor_exec_cgroup = ""
	I0314 00:22:49.491991   39054 command_runner.go:130] > monitor_env = [
	I0314 00:22:49.492005   39054 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0314 00:22:49.492012   39054 command_runner.go:130] > ]
	I0314 00:22:49.492020   39054 command_runner.go:130] > privileged_without_host_devices = false
	I0314 00:22:49.492032   39054 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0314 00:22:49.492043   39054 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0314 00:22:49.492055   39054 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0314 00:22:49.492063   39054 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0314 00:22:49.492078   39054 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0314 00:22:49.492092   39054 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0314 00:22:49.492106   39054 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0314 00:22:49.492118   39054 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0314 00:22:49.492127   39054 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0314 00:22:49.492141   39054 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0314 00:22:49.492147   39054 command_runner.go:130] > # Example:
	I0314 00:22:49.492153   39054 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0314 00:22:49.492164   39054 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0314 00:22:49.492175   39054 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0314 00:22:49.492186   39054 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0314 00:22:49.492194   39054 command_runner.go:130] > # cpuset = 0
	I0314 00:22:49.492208   39054 command_runner.go:130] > # cpushares = "0-1"
	I0314 00:22:49.492213   39054 command_runner.go:130] > # Where:
	I0314 00:22:49.492220   39054 command_runner.go:130] > # The workload name is workload-type.
	I0314 00:22:49.492229   39054 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0314 00:22:49.492234   39054 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0314 00:22:49.492239   39054 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0314 00:22:49.492246   39054 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0314 00:22:49.492253   39054 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0314 00:22:49.492260   39054 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0314 00:22:49.492269   39054 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0314 00:22:49.492275   39054 command_runner.go:130] > # Default value is set to true
	I0314 00:22:49.492282   39054 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0314 00:22:49.492291   39054 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0314 00:22:49.492298   39054 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0314 00:22:49.492305   39054 command_runner.go:130] > # Default value is set to 'false'
	I0314 00:22:49.492312   39054 command_runner.go:130] > # disable_hostport_mapping = false
	I0314 00:22:49.492322   39054 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0314 00:22:49.492327   39054 command_runner.go:130] > #
	I0314 00:22:49.492335   39054 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0314 00:22:49.492343   39054 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0314 00:22:49.492351   39054 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0314 00:22:49.492357   39054 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0314 00:22:49.492362   39054 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0314 00:22:49.492365   39054 command_runner.go:130] > [crio.image]
	I0314 00:22:49.492371   39054 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0314 00:22:49.492375   39054 command_runner.go:130] > # default_transport = "docker://"
	I0314 00:22:49.492380   39054 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0314 00:22:49.492386   39054 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0314 00:22:49.492393   39054 command_runner.go:130] > # global_auth_file = ""
	I0314 00:22:49.492397   39054 command_runner.go:130] > # The image used to instantiate infra containers.
	I0314 00:22:49.492405   39054 command_runner.go:130] > # This option supports live configuration reload.
	I0314 00:22:49.492409   39054 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0314 00:22:49.492425   39054 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0314 00:22:49.492438   39054 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0314 00:22:49.492450   39054 command_runner.go:130] > # This option supports live configuration reload.
	I0314 00:22:49.492460   39054 command_runner.go:130] > # pause_image_auth_file = ""
	I0314 00:22:49.492480   39054 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0314 00:22:49.492493   39054 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0314 00:22:49.492502   39054 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0314 00:22:49.492507   39054 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0314 00:22:49.492513   39054 command_runner.go:130] > # pause_command = "/pause"
	I0314 00:22:49.492519   39054 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0314 00:22:49.492527   39054 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0314 00:22:49.492532   39054 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0314 00:22:49.492540   39054 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0314 00:22:49.492548   39054 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0314 00:22:49.492556   39054 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0314 00:22:49.492562   39054 command_runner.go:130] > # pinned_images = [
	I0314 00:22:49.492565   39054 command_runner.go:130] > # ]
	I0314 00:22:49.492571   39054 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0314 00:22:49.492577   39054 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0314 00:22:49.492585   39054 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0314 00:22:49.492591   39054 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0314 00:22:49.492599   39054 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0314 00:22:49.492603   39054 command_runner.go:130] > # signature_policy = ""
	I0314 00:22:49.492609   39054 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0314 00:22:49.492616   39054 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0314 00:22:49.492624   39054 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0314 00:22:49.492629   39054 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0314 00:22:49.492637   39054 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0314 00:22:49.492641   39054 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0314 00:22:49.492657   39054 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0314 00:22:49.492671   39054 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0314 00:22:49.492678   39054 command_runner.go:130] > # changing them here.
	I0314 00:22:49.492682   39054 command_runner.go:130] > # insecure_registries = [
	I0314 00:22:49.492688   39054 command_runner.go:130] > # ]
	I0314 00:22:49.492694   39054 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0314 00:22:49.492700   39054 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0314 00:22:49.492705   39054 command_runner.go:130] > # image_volumes = "mkdir"
	I0314 00:22:49.492711   39054 command_runner.go:130] > # Temporary directory to use for storing big files
	I0314 00:22:49.492715   39054 command_runner.go:130] > # big_files_temporary_dir = ""
	I0314 00:22:49.492721   39054 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0314 00:22:49.492734   39054 command_runner.go:130] > # CNI plugins.
	I0314 00:22:49.492740   39054 command_runner.go:130] > [crio.network]
	I0314 00:22:49.492746   39054 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0314 00:22:49.492753   39054 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0314 00:22:49.492757   39054 command_runner.go:130] > # cni_default_network = ""
	I0314 00:22:49.492765   39054 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0314 00:22:49.492770   39054 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0314 00:22:49.492775   39054 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0314 00:22:49.492781   39054 command_runner.go:130] > # plugin_dirs = [
	I0314 00:22:49.492785   39054 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0314 00:22:49.492792   39054 command_runner.go:130] > # ]
	I0314 00:22:49.492798   39054 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0314 00:22:49.492804   39054 command_runner.go:130] > [crio.metrics]
	I0314 00:22:49.492809   39054 command_runner.go:130] > # Globally enable or disable metrics support.
	I0314 00:22:49.492815   39054 command_runner.go:130] > enable_metrics = true
	I0314 00:22:49.492819   39054 command_runner.go:130] > # Specify enabled metrics collectors.
	I0314 00:22:49.492825   39054 command_runner.go:130] > # Per default all metrics are enabled.
	I0314 00:22:49.492833   39054 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0314 00:22:49.492841   39054 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0314 00:22:49.492849   39054 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0314 00:22:49.492855   39054 command_runner.go:130] > # metrics_collectors = [
	I0314 00:22:49.492859   39054 command_runner.go:130] > # 	"operations",
	I0314 00:22:49.492866   39054 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0314 00:22:49.492871   39054 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0314 00:22:49.492877   39054 command_runner.go:130] > # 	"operations_errors",
	I0314 00:22:49.492881   39054 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0314 00:22:49.492884   39054 command_runner.go:130] > # 	"image_pulls_by_name",
	I0314 00:22:49.492891   39054 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0314 00:22:49.492895   39054 command_runner.go:130] > # 	"image_pulls_failures",
	I0314 00:22:49.492901   39054 command_runner.go:130] > # 	"image_pulls_successes",
	I0314 00:22:49.492905   39054 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0314 00:22:49.492911   39054 command_runner.go:130] > # 	"image_layer_reuse",
	I0314 00:22:49.492916   39054 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0314 00:22:49.492922   39054 command_runner.go:130] > # 	"containers_oom_total",
	I0314 00:22:49.492925   39054 command_runner.go:130] > # 	"containers_oom",
	I0314 00:22:49.492931   39054 command_runner.go:130] > # 	"processes_defunct",
	I0314 00:22:49.492939   39054 command_runner.go:130] > # 	"operations_total",
	I0314 00:22:49.492946   39054 command_runner.go:130] > # 	"operations_latency_seconds",
	I0314 00:22:49.492950   39054 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0314 00:22:49.492957   39054 command_runner.go:130] > # 	"operations_errors_total",
	I0314 00:22:49.492960   39054 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0314 00:22:49.492967   39054 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0314 00:22:49.492971   39054 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0314 00:22:49.492975   39054 command_runner.go:130] > # 	"image_pulls_success_total",
	I0314 00:22:49.492980   39054 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0314 00:22:49.492984   39054 command_runner.go:130] > # 	"containers_oom_count_total",
	I0314 00:22:49.492992   39054 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0314 00:22:49.492996   39054 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0314 00:22:49.492999   39054 command_runner.go:130] > # ]
	I0314 00:22:49.493004   39054 command_runner.go:130] > # The port on which the metrics server will listen.
	I0314 00:22:49.493009   39054 command_runner.go:130] > # metrics_port = 9090
	I0314 00:22:49.493013   39054 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0314 00:22:49.493019   39054 command_runner.go:130] > # metrics_socket = ""
	I0314 00:22:49.493024   39054 command_runner.go:130] > # The certificate for the secure metrics server.
	I0314 00:22:49.493031   39054 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0314 00:22:49.493044   39054 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0314 00:22:49.493055   39054 command_runner.go:130] > # certificate on any modification event.
	I0314 00:22:49.493063   39054 command_runner.go:130] > # metrics_cert = ""
	I0314 00:22:49.493068   39054 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0314 00:22:49.493076   39054 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0314 00:22:49.493080   39054 command_runner.go:130] > # metrics_key = ""
	I0314 00:22:49.493088   39054 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0314 00:22:49.493091   39054 command_runner.go:130] > [crio.tracing]
	I0314 00:22:49.493097   39054 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0314 00:22:49.493104   39054 command_runner.go:130] > # enable_tracing = false
	I0314 00:22:49.493109   39054 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0314 00:22:49.493115   39054 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0314 00:22:49.493121   39054 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0314 00:22:49.493129   39054 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0314 00:22:49.493136   39054 command_runner.go:130] > # CRI-O NRI configuration.
	I0314 00:22:49.493139   39054 command_runner.go:130] > [crio.nri]
	I0314 00:22:49.493144   39054 command_runner.go:130] > # Globally enable or disable NRI.
	I0314 00:22:49.493154   39054 command_runner.go:130] > # enable_nri = false
	I0314 00:22:49.493161   39054 command_runner.go:130] > # NRI socket to listen on.
	I0314 00:22:49.493165   39054 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0314 00:22:49.493171   39054 command_runner.go:130] > # NRI plugin directory to use.
	I0314 00:22:49.493176   39054 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0314 00:22:49.493183   39054 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0314 00:22:49.493187   39054 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0314 00:22:49.493194   39054 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0314 00:22:49.493199   39054 command_runner.go:130] > # nri_disable_connections = false
	I0314 00:22:49.493206   39054 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0314 00:22:49.493210   39054 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0314 00:22:49.493217   39054 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0314 00:22:49.493224   39054 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0314 00:22:49.493229   39054 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0314 00:22:49.493232   39054 command_runner.go:130] > [crio.stats]
	I0314 00:22:49.493240   39054 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0314 00:22:49.493248   39054 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0314 00:22:49.493254   39054 command_runner.go:130] > # stats_collection_period = 0
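
The config dump above shows metrics enabled (enable_metrics = true) on the default metrics_port = 9090. A minimal Go sketch of scraping that endpoint follows; it assumes CRI-O is reachable on localhost and serves its Prometheus metrics at the conventional /metrics path (neither assumption is stated in the log itself).

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Assumed endpoint: metrics_port = 9090 from the config above, plus the
	// usual Prometheus "/metrics" path.
	resp, err := http.Get("http://localhost:9090/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}

	// Print only the crio_/container_runtime_ series named in the
	// metrics_collectors comment block above.
	for _, line := range strings.Split(string(body), "\n") {
		if strings.HasPrefix(line, "crio_") || strings.HasPrefix(line, "container_runtime_") {
			fmt.Println(line)
		}
	}
}
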
	I0314 00:22:49.493416   39054 cni.go:84] Creating CNI manager for ""
	I0314 00:22:49.493431   39054 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0314 00:22:49.493440   39054 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:22:49.493458   39054 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-507871 NodeName:multinode-507871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:22:49.493601   39054 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-507871"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
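
The "0%!"(MISSING) fragments in the evictionHard section above appear to be an artifact of rendering this template through Go's fmt package rather than the intended values: the literal "%" in "0%" is reached after the argument list has been consumed, so fmt parses it as the start of a verb and prints %!"(MISSING). The intended settings are simply "0%". A minimal Go sketch reproducing the effect (the surrounding strings here are invented for illustration, not taken from minikube):

package main

import "fmt"

func main() {
	// Hypothetical template fragment; only the handling of the bare "%" matters.
	tmpl := "node %s\nevictionHard:\n  nodefs.available: \"0%\"\n"

	// The single argument is consumed by %s. When fmt reaches the "%" before
	// the closing quote, no argument is left, so it emits %!"(MISSING) in
	// place of the literal percent sign -- the fragment seen above.
	fmt.Printf(tmpl, "multinode-507871")
}
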
	
	I0314 00:22:49.493664   39054 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:22:49.504269   39054 command_runner.go:130] > kubeadm
	I0314 00:22:49.504291   39054 command_runner.go:130] > kubectl
	I0314 00:22:49.504295   39054 command_runner.go:130] > kubelet
	I0314 00:22:49.504314   39054 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:22:49.504365   39054 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:22:49.514811   39054 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0314 00:22:49.532955   39054 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:22:49.550989   39054 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0314 00:22:49.569379   39054 ssh_runner.go:195] Run: grep 192.168.39.60	control-plane.minikube.internal$ /etc/hosts
	I0314 00:22:49.573256   39054 command_runner.go:130] > 192.168.39.60	control-plane.minikube.internal
	I0314 00:22:49.573471   39054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:22:49.716887   39054 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:22:49.734187   39054 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871 for IP: 192.168.39.60
	I0314 00:22:49.734217   39054 certs.go:194] generating shared ca certs ...
	I0314 00:22:49.734238   39054 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:22:49.734439   39054 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:22:49.734509   39054 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:22:49.734521   39054 certs.go:256] generating profile certs ...
	I0314 00:22:49.734604   39054 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/client.key
	I0314 00:22:49.734661   39054 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/apiserver.key.3aa17428
	I0314 00:22:49.734694   39054 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/proxy-client.key
	I0314 00:22:49.734704   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 00:22:49.734715   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0314 00:22:49.734730   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 00:22:49.734740   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 00:22:49.734758   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 00:22:49.734795   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 00:22:49.734812   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 00:22:49.734822   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 00:22:49.734868   39054 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:22:49.734903   39054 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:22:49.734912   39054 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:22:49.734940   39054 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:22:49.734961   39054 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:22:49.734983   39054 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:22:49.735018   39054 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:22:49.735049   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /usr/share/ca-certificates/122682.pem
	I0314 00:22:49.735062   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:22:49.735074   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem -> /usr/share/ca-certificates/12268.pem
	I0314 00:22:49.735647   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:22:49.763664   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:22:49.789319   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:22:49.815846   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:22:49.842525   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 00:22:49.869050   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 00:22:49.894409   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:22:49.920905   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:22:49.946665   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:22:49.971541   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:22:49.997240   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:22:50.023614   39054 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:22:50.041138   39054 ssh_runner.go:195] Run: openssl version
	I0314 00:22:50.046923   39054 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0314 00:22:50.047188   39054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:22:50.058420   39054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:22:50.063192   39054 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:22:50.063218   39054 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:22:50.063260   39054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:22:50.069297   39054 command_runner.go:130] > 3ec20f2e
	I0314 00:22:50.069395   39054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:22:50.079812   39054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:22:50.092032   39054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:22:50.096821   39054 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:22:50.096918   39054 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:22:50.096980   39054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:22:50.103212   39054 command_runner.go:130] > b5213941
	I0314 00:22:50.103409   39054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:22:50.113576   39054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:22:50.124948   39054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:22:50.129861   39054 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:22:50.129888   39054 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:22:50.129922   39054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:22:50.135935   39054 command_runner.go:130] > 51391683
	I0314 00:22:50.135989   39054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:22:50.145774   39054 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:22:50.150303   39054 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:22:50.150335   39054 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0314 00:22:50.150344   39054 command_runner.go:130] > Device: 253,1	Inode: 7338557     Links: 1
	I0314 00:22:50.150356   39054 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 00:22:50.150364   39054 command_runner.go:130] > Access: 2024-03-14 00:16:28.216058597 +0000
	I0314 00:22:50.150371   39054 command_runner.go:130] > Modify: 2024-03-14 00:16:28.216058597 +0000
	I0314 00:22:50.150376   39054 command_runner.go:130] > Change: 2024-03-14 00:16:28.216058597 +0000
	I0314 00:22:50.150383   39054 command_runner.go:130] >  Birth: 2024-03-14 00:16:28.216058597 +0000
	I0314 00:22:50.150434   39054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:22:50.156271   39054 command_runner.go:130] > Certificate will not expire
	I0314 00:22:50.156335   39054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:22:50.162013   39054 command_runner.go:130] > Certificate will not expire
	I0314 00:22:50.162073   39054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:22:50.167858   39054 command_runner.go:130] > Certificate will not expire
	I0314 00:22:50.167957   39054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:22:50.173473   39054 command_runner.go:130] > Certificate will not expire
	I0314 00:22:50.173544   39054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:22:50.179104   39054 command_runner.go:130] > Certificate will not expire
	I0314 00:22:50.179157   39054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 00:22:50.184895   39054 command_runner.go:130] > Certificate will not expire
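
Each of the checks above shells out to `openssl x509 -noout -checkend 86400`, which reports whether the certificate expires within the next 24 hours. A small Go sketch of the same check using crypto/x509 directly is shown below; the path and the 24-hour window are taken from the log, but this is an illustration of the check, not minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Same certificate and 86400s window as the openssl -checkend calls above.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}
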
	I0314 00:22:50.184970   39054 kubeadm.go:391] StartCluster: {Name:multinode-507871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-507871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:22:50.185074   39054 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:22:50.185111   39054 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:22:50.223815   39054 command_runner.go:130] > 0a403f7ce3b87d14b128e411c8722cedf7587963b81e4f27ef0697b909f30530
	I0314 00:22:50.223839   39054 command_runner.go:130] > d7ab912de31f7f01362e446bece9ec16e82499aa78c614dbdc345030a7a1a20d
	I0314 00:22:50.223848   39054 command_runner.go:130] > 9c102f868585b7b3a9473445f469919e0e6ead84e81ad5d8530fabdfbd16969c
	I0314 00:22:50.223856   39054 command_runner.go:130] > 43cac9d56e9954fe3fecef5d956cd1f5aca82518b1a25227900c06316902b6cd
	I0314 00:22:50.223862   39054 command_runner.go:130] > 132b3767fdc0f7f1e2bc85f2a43c23e181ae105c607486a4a80938ca6fd6033a
	I0314 00:22:50.223870   39054 command_runner.go:130] > 97f09dd3764f0f3583729e4f8d85c01ccfe809a991df11d63266b6baf1449f4d
	I0314 00:22:50.223878   39054 command_runner.go:130] > 6318489143ee06c76a22aae895ed745c0918737b90a5e101cef6cee51b2c2e54
	I0314 00:22:50.223904   39054 command_runner.go:130] > ababf7bade675f2692cbc0b94152582f6dcc6519525a82b73e773ad3b95a3ab6
	I0314 00:22:50.223933   39054 cri.go:89] found id: "0a403f7ce3b87d14b128e411c8722cedf7587963b81e4f27ef0697b909f30530"
	I0314 00:22:50.223944   39054 cri.go:89] found id: "d7ab912de31f7f01362e446bece9ec16e82499aa78c614dbdc345030a7a1a20d"
	I0314 00:22:50.223950   39054 cri.go:89] found id: "9c102f868585b7b3a9473445f469919e0e6ead84e81ad5d8530fabdfbd16969c"
	I0314 00:22:50.223956   39054 cri.go:89] found id: "43cac9d56e9954fe3fecef5d956cd1f5aca82518b1a25227900c06316902b6cd"
	I0314 00:22:50.223961   39054 cri.go:89] found id: "132b3767fdc0f7f1e2bc85f2a43c23e181ae105c607486a4a80938ca6fd6033a"
	I0314 00:22:50.223975   39054 cri.go:89] found id: "97f09dd3764f0f3583729e4f8d85c01ccfe809a991df11d63266b6baf1449f4d"
	I0314 00:22:50.223983   39054 cri.go:89] found id: "6318489143ee06c76a22aae895ed745c0918737b90a5e101cef6cee51b2c2e54"
	I0314 00:22:50.223988   39054 cri.go:89] found id: "ababf7bade675f2692cbc0b94152582f6dcc6519525a82b73e773ad3b95a3ab6"
	I0314 00:22:50.223996   39054 cri.go:89] found id: ""
	I0314 00:22:50.224060   39054 ssh_runner.go:195] Run: sudo runc list -f json
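	(Annotation: the IDs above come from the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call a few lines earlier; `--quiet` makes crictl print one container ID per line, which the log then records as the "found id:" entries, and `runc list -f json` is run afterwards to cross-check what the low-level runtime reports. A small sketch of the same enumeration follows; the helper name is hypothetical, it is not the test's code, and it assumes sudo and crictl are available on the node.)

	// listids.go - illustrative sketch only, not minikube's implementation.
	// Lists CRI container IDs whose pod namespace label is kube-system,
	// mirroring the crictl invocation recorded in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainerIDs shells out to crictl the same way the log shows
	// (hypothetical helper name).
	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}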
	
	
	==> CRI-O <==
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.648065929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710375859648041284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25668373-b71c-49d3-a973-e39734f02766 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.648684970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c5f1a7b-027c-46b4-b9c2-c14425cd6784 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.648764422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c5f1a7b-027c-46b4-b9c2-c14425cd6784 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.649131016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0e31ec78a89c7ca0a9dc73aecdbd9cecfc964c835242668df95acda77eebe7f,PodSandboxId:6a7e85e12aeea0b297c16c5f8cb28db0705158d10119d41c2b9687be6a3db15f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710375804340145852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9b9ad83c74dbb1c87f440444c53ca9bc54169233d2a5506e8cce0cc78dd1f0,PodSandboxId:b660159f1e6bad61d5859be6ca05cb84ba658c0bc178844e947da3d32776d2df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710375777865376188,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b8bdb8695937d178aa2167246ec21bbd86d93e0093bebaa5a1a92d060e682f,PodSandboxId:c53c270508e37eef583606a5f5af0c9c93cedf7e2b009c9c1acd2355a418cacc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710375777923072299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42
db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f3416cc9a6c63c165d8946729162d870c9de2a158523c63ebe4985c4558cb3,PodSandboxId:bf18cc0318910abf10572dc172b9506ac44b92ea10d498f73763e0d9029f3181,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710375777905011177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},
Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e283bf2d8cdb9edb01135144d5c43358db6115da0c75fd160e6555e8f5daf24d,PodSandboxId:4b576567650ab391dd934dca98a5d22e0a3dbba511bd776e6306e25b172a5d31,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710375777858009789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.k
ubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196ecd411466bf39c3aa3952e43cd1133499fe59029dc94d2cfa4061a1e763ed,PodSandboxId:be969458a98b88a08235e83dde24ddb59baad5ef23e0852df618de666bba4218,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710375773209805119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]string{io.kubernetes.container.hash: b52bba76,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95d69e88eba0a0dabfa6b362cc39e3aa3b1d859ae4dc2ba0bf2ed712cf64daf,PodSandboxId:b800f24e3af22f9fe36fcb5c368088d27ab290dcdd48d0de6e68dc2e10cabd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710375773184094945,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a96cc8a747b9e9ba5ceedd453e23e200c916661c3a2d3ce413b5e35bab257a,PodSandboxId:85e1d34ad57fde8ea8f48b2bb3e0e6dbafbeae731fc366b54540ad9d8779e420,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710375770797020070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e82f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50238b896e9f2bcc7319f3dcc94aaaadd48bf6141505c5b97bd081ce899e2ce,PodSandboxId:7446a14c719bfa8c2dbcb12be65788d3e6de7ecd7c886aac81c15cf3b451e269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710375770781461120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23110e04e7259631abff4d15fef0a37e07abe5fc2499677309dde0fd98d1b13c,PodSandboxId:9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710375463554062812,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a403f7ce3b87d14b128e411c8722cedf7587963b81e4f27ef0697b909f30530,PodSandboxId:a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710375416589118541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ab912de31f7f01362e446bece9ec16e82499aa78c614dbdc345030a7a1a20d,PodSandboxId:b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710375416550266438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c102f868585b7b3a9473445f469919e0e6ead84e81ad5d8530fabdfbd16969c,PodSandboxId:8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710375414768253541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.kubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43cac9d56e9954fe3fecef5d956cd1f5aca82518b1a25227900c06316902b6cd,PodSandboxId:2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710375411679673553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b3767fdc0f7f1e2bc85f2a43c23e181ae105c607486a4a80938ca6fd6033a,PodSandboxId:c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710375391459514429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f09dd3764f0f3583729e4f8d85c01ccfe809a991df11d63266b6baf1449f4d,PodSandboxId:c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710375391443444130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e8
2f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6318489143ee06c76a22aae895ed745c0918737b90a5e101cef6cee51b2c2e54,PodSandboxId:8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710375391413498509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]
string{io.kubernetes.container.hash: b52bba76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ababf7bade675f2692cbc0b94152582f6dcc6519525a82b73e773ad3b95a3ab6,PodSandboxId:181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710375391370912167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations
:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c5f1a7b-027c-46b4-b9c2-c14425cd6784 name=/runtime.v1.RuntimeService/ListContainers
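	(Annotation: the request/response pair above, and the near-identical pairs that follow, are CRI-O's debug trace of the CRI RuntimeService.ListContainers RPC issued against its socket; with an empty filter CRI-O logs "No filters were applied" and returns every container, running and exited. A minimal client sketch of that same RPC follows; it assumes the standard k8s.io/cri-api and google.golang.org/grpc modules and the usual /var/run/crio/crio.sock path, and it is not part of this test.)

	// listcontainers.go - illustrative sketch only; shows the ListContainers
	// RPC that the CRI-O debug log above is tracing.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket (path assumed; adjust for your node).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimev1.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Empty filter: the server returns the full container list.
		resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.12s %s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}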
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.692176343Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c54bacbf-054a-4267-8769-ee46ef533d8b name=/runtime.v1.RuntimeService/Version
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.692277476Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c54bacbf-054a-4267-8769-ee46ef533d8b name=/runtime.v1.RuntimeService/Version
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.693748326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26145281-87f5-418c-955b-c547290955b3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.694185701Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710375859694160629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26145281-87f5-418c-955b-c547290955b3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.695120985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7f30025-737b-426a-9f28-3cf3f06e820b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.695200837Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7f30025-737b-426a-9f28-3cf3f06e820b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.696447346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0e31ec78a89c7ca0a9dc73aecdbd9cecfc964c835242668df95acda77eebe7f,PodSandboxId:6a7e85e12aeea0b297c16c5f8cb28db0705158d10119d41c2b9687be6a3db15f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710375804340145852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9b9ad83c74dbb1c87f440444c53ca9bc54169233d2a5506e8cce0cc78dd1f0,PodSandboxId:b660159f1e6bad61d5859be6ca05cb84ba658c0bc178844e947da3d32776d2df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710375777865376188,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b8bdb8695937d178aa2167246ec21bbd86d93e0093bebaa5a1a92d060e682f,PodSandboxId:c53c270508e37eef583606a5f5af0c9c93cedf7e2b009c9c1acd2355a418cacc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710375777923072299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42
db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f3416cc9a6c63c165d8946729162d870c9de2a158523c63ebe4985c4558cb3,PodSandboxId:bf18cc0318910abf10572dc172b9506ac44b92ea10d498f73763e0d9029f3181,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710375777905011177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},
Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e283bf2d8cdb9edb01135144d5c43358db6115da0c75fd160e6555e8f5daf24d,PodSandboxId:4b576567650ab391dd934dca98a5d22e0a3dbba511bd776e6306e25b172a5d31,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710375777858009789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.k
ubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196ecd411466bf39c3aa3952e43cd1133499fe59029dc94d2cfa4061a1e763ed,PodSandboxId:be969458a98b88a08235e83dde24ddb59baad5ef23e0852df618de666bba4218,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710375773209805119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]string{io.kubernetes.container.hash: b52bba76,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95d69e88eba0a0dabfa6b362cc39e3aa3b1d859ae4dc2ba0bf2ed712cf64daf,PodSandboxId:b800f24e3af22f9fe36fcb5c368088d27ab290dcdd48d0de6e68dc2e10cabd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710375773184094945,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a96cc8a747b9e9ba5ceedd453e23e200c916661c3a2d3ce413b5e35bab257a,PodSandboxId:85e1d34ad57fde8ea8f48b2bb3e0e6dbafbeae731fc366b54540ad9d8779e420,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710375770797020070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e82f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50238b896e9f2bcc7319f3dcc94aaaadd48bf6141505c5b97bd081ce899e2ce,PodSandboxId:7446a14c719bfa8c2dbcb12be65788d3e6de7ecd7c886aac81c15cf3b451e269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710375770781461120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23110e04e7259631abff4d15fef0a37e07abe5fc2499677309dde0fd98d1b13c,PodSandboxId:9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710375463554062812,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a403f7ce3b87d14b128e411c8722cedf7587963b81e4f27ef0697b909f30530,PodSandboxId:a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710375416589118541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ab912de31f7f01362e446bece9ec16e82499aa78c614dbdc345030a7a1a20d,PodSandboxId:b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710375416550266438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c102f868585b7b3a9473445f469919e0e6ead84e81ad5d8530fabdfbd16969c,PodSandboxId:8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710375414768253541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.kubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43cac9d56e9954fe3fecef5d956cd1f5aca82518b1a25227900c06316902b6cd,PodSandboxId:2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710375411679673553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b3767fdc0f7f1e2bc85f2a43c23e181ae105c607486a4a80938ca6fd6033a,PodSandboxId:c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710375391459514429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f09dd3764f0f3583729e4f8d85c01ccfe809a991df11d63266b6baf1449f4d,PodSandboxId:c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710375391443444130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e8
2f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6318489143ee06c76a22aae895ed745c0918737b90a5e101cef6cee51b2c2e54,PodSandboxId:8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710375391413498509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]
string{io.kubernetes.container.hash: b52bba76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ababf7bade675f2692cbc0b94152582f6dcc6519525a82b73e773ad3b95a3ab6,PodSandboxId:181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710375391370912167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations
:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7f30025-737b-426a-9f28-3cf3f06e820b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.741427995Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3033b5dd-4cf0-4ee0-a360-288782fa7247 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.741709836Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3033b5dd-4cf0-4ee0-a360-288782fa7247 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.748701181Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e669271-ea22-4465-964d-d3ad7a63e077 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.749143827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710375859749118196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e669271-ea22-4465-964d-d3ad7a63e077 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.749955660Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f19a444d-edb6-454d-b9a3-6ec3e66e6461 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.750018564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f19a444d-edb6-454d-b9a3-6ec3e66e6461 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.750371791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0e31ec78a89c7ca0a9dc73aecdbd9cecfc964c835242668df95acda77eebe7f,PodSandboxId:6a7e85e12aeea0b297c16c5f8cb28db0705158d10119d41c2b9687be6a3db15f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710375804340145852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9b9ad83c74dbb1c87f440444c53ca9bc54169233d2a5506e8cce0cc78dd1f0,PodSandboxId:b660159f1e6bad61d5859be6ca05cb84ba658c0bc178844e947da3d32776d2df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710375777865376188,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b8bdb8695937d178aa2167246ec21bbd86d93e0093bebaa5a1a92d060e682f,PodSandboxId:c53c270508e37eef583606a5f5af0c9c93cedf7e2b009c9c1acd2355a418cacc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710375777923072299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42
db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f3416cc9a6c63c165d8946729162d870c9de2a158523c63ebe4985c4558cb3,PodSandboxId:bf18cc0318910abf10572dc172b9506ac44b92ea10d498f73763e0d9029f3181,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710375777905011177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},
Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e283bf2d8cdb9edb01135144d5c43358db6115da0c75fd160e6555e8f5daf24d,PodSandboxId:4b576567650ab391dd934dca98a5d22e0a3dbba511bd776e6306e25b172a5d31,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710375777858009789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.k
ubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196ecd411466bf39c3aa3952e43cd1133499fe59029dc94d2cfa4061a1e763ed,PodSandboxId:be969458a98b88a08235e83dde24ddb59baad5ef23e0852df618de666bba4218,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710375773209805119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]string{io.kubernetes.container.hash: b52bba76,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95d69e88eba0a0dabfa6b362cc39e3aa3b1d859ae4dc2ba0bf2ed712cf64daf,PodSandboxId:b800f24e3af22f9fe36fcb5c368088d27ab290dcdd48d0de6e68dc2e10cabd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710375773184094945,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a96cc8a747b9e9ba5ceedd453e23e200c916661c3a2d3ce413b5e35bab257a,PodSandboxId:85e1d34ad57fde8ea8f48b2bb3e0e6dbafbeae731fc366b54540ad9d8779e420,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710375770797020070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e82f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50238b896e9f2bcc7319f3dcc94aaaadd48bf6141505c5b97bd081ce899e2ce,PodSandboxId:7446a14c719bfa8c2dbcb12be65788d3e6de7ecd7c886aac81c15cf3b451e269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710375770781461120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23110e04e7259631abff4d15fef0a37e07abe5fc2499677309dde0fd98d1b13c,PodSandboxId:9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710375463554062812,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a403f7ce3b87d14b128e411c8722cedf7587963b81e4f27ef0697b909f30530,PodSandboxId:a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710375416589118541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ab912de31f7f01362e446bece9ec16e82499aa78c614dbdc345030a7a1a20d,PodSandboxId:b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710375416550266438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c102f868585b7b3a9473445f469919e0e6ead84e81ad5d8530fabdfbd16969c,PodSandboxId:8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710375414768253541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.kubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43cac9d56e9954fe3fecef5d956cd1f5aca82518b1a25227900c06316902b6cd,PodSandboxId:2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710375411679673553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b3767fdc0f7f1e2bc85f2a43c23e181ae105c607486a4a80938ca6fd6033a,PodSandboxId:c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710375391459514429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f09dd3764f0f3583729e4f8d85c01ccfe809a991df11d63266b6baf1449f4d,PodSandboxId:c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710375391443444130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e8
2f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6318489143ee06c76a22aae895ed745c0918737b90a5e101cef6cee51b2c2e54,PodSandboxId:8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710375391413498509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]
string{io.kubernetes.container.hash: b52bba76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ababf7bade675f2692cbc0b94152582f6dcc6519525a82b73e773ad3b95a3ab6,PodSandboxId:181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710375391370912167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations
:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f19a444d-edb6-454d-b9a3-6ec3e66e6461 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.798208063Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d589821e-6fbe-4535-99e8-4fa313938a1e name=/runtime.v1.RuntimeService/Version
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.798281859Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d589821e-6fbe-4535-99e8-4fa313938a1e name=/runtime.v1.RuntimeService/Version
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.799496795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83fe7e37-54c2-41ec-96d8-47629873dc37 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.800367625Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710375859800340413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83fe7e37-54c2-41ec-96d8-47629873dc37 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.801158175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7652693b-1eaf-4ee4-b59f-bf651c7b654e name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.801233166Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7652693b-1eaf-4ee4-b59f-bf651c7b654e name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:24:19 multinode-507871 crio[2844]: time="2024-03-14 00:24:19.801683231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0e31ec78a89c7ca0a9dc73aecdbd9cecfc964c835242668df95acda77eebe7f,PodSandboxId:6a7e85e12aeea0b297c16c5f8cb28db0705158d10119d41c2b9687be6a3db15f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710375804340145852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9b9ad83c74dbb1c87f440444c53ca9bc54169233d2a5506e8cce0cc78dd1f0,PodSandboxId:b660159f1e6bad61d5859be6ca05cb84ba658c0bc178844e947da3d32776d2df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710375777865376188,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b8bdb8695937d178aa2167246ec21bbd86d93e0093bebaa5a1a92d060e682f,PodSandboxId:c53c270508e37eef583606a5f5af0c9c93cedf7e2b009c9c1acd2355a418cacc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710375777923072299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42
db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f3416cc9a6c63c165d8946729162d870c9de2a158523c63ebe4985c4558cb3,PodSandboxId:bf18cc0318910abf10572dc172b9506ac44b92ea10d498f73763e0d9029f3181,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710375777905011177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},
Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e283bf2d8cdb9edb01135144d5c43358db6115da0c75fd160e6555e8f5daf24d,PodSandboxId:4b576567650ab391dd934dca98a5d22e0a3dbba511bd776e6306e25b172a5d31,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710375777858009789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.k
ubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196ecd411466bf39c3aa3952e43cd1133499fe59029dc94d2cfa4061a1e763ed,PodSandboxId:be969458a98b88a08235e83dde24ddb59baad5ef23e0852df618de666bba4218,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710375773209805119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]string{io.kubernetes.container.hash: b52bba76,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95d69e88eba0a0dabfa6b362cc39e3aa3b1d859ae4dc2ba0bf2ed712cf64daf,PodSandboxId:b800f24e3af22f9fe36fcb5c368088d27ab290dcdd48d0de6e68dc2e10cabd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710375773184094945,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a96cc8a747b9e9ba5ceedd453e23e200c916661c3a2d3ce413b5e35bab257a,PodSandboxId:85e1d34ad57fde8ea8f48b2bb3e0e6dbafbeae731fc366b54540ad9d8779e420,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710375770797020070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e82f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50238b896e9f2bcc7319f3dcc94aaaadd48bf6141505c5b97bd081ce899e2ce,PodSandboxId:7446a14c719bfa8c2dbcb12be65788d3e6de7ecd7c886aac81c15cf3b451e269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710375770781461120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23110e04e7259631abff4d15fef0a37e07abe5fc2499677309dde0fd98d1b13c,PodSandboxId:9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710375463554062812,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a403f7ce3b87d14b128e411c8722cedf7587963b81e4f27ef0697b909f30530,PodSandboxId:a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710375416589118541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ab912de31f7f01362e446bece9ec16e82499aa78c614dbdc345030a7a1a20d,PodSandboxId:b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710375416550266438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c102f868585b7b3a9473445f469919e0e6ead84e81ad5d8530fabdfbd16969c,PodSandboxId:8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710375414768253541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.kubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43cac9d56e9954fe3fecef5d956cd1f5aca82518b1a25227900c06316902b6cd,PodSandboxId:2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710375411679673553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b3767fdc0f7f1e2bc85f2a43c23e181ae105c607486a4a80938ca6fd6033a,PodSandboxId:c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710375391459514429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f09dd3764f0f3583729e4f8d85c01ccfe809a991df11d63266b6baf1449f4d,PodSandboxId:c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710375391443444130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e8
2f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6318489143ee06c76a22aae895ed745c0918737b90a5e101cef6cee51b2c2e54,PodSandboxId:8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710375391413498509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]
string{io.kubernetes.container.hash: b52bba76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ababf7bade675f2692cbc0b94152582f6dcc6519525a82b73e773ad3b95a3ab6,PodSandboxId:181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710375391370912167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations
:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7652693b-1eaf-4ee4-b59f-bf651c7b654e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c0e31ec78a89c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      55 seconds ago       Running             busybox                   1                   6a7e85e12aeea       busybox-5b5d89c9d6-vrskm
	60b8bdb869593       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                1                   c53c270508e37       kube-proxy-vlzf2
	94f3416cc9a6c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   bf18cc0318910       storage-provisioner
	1d9b9ad83c74d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   1                   b660159f1e6ba       coredns-5dd5756b68-9vlnk
	e283bf2d8cdb9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   4b576567650ab       kindnet-4lwzg
	196ecd411466b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      1                   be969458a98b8       etcd-multinode-507871
	e95d69e88eba0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            1                   b800f24e3af22       kube-scheduler-multinode-507871
	c2a96cc8a747b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            1                   85e1d34ad57fd       kube-apiserver-multinode-507871
	b50238b896e9f       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   1                   7446a14c719bf       kube-controller-manager-multinode-507871
	23110e04e7259       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   9cce3068be90d       busybox-5b5d89c9d6-vrskm
	0a403f7ce3b87       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   a6027e6899044       storage-provisioner
	d7ab912de31f7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago        Exited              coredns                   0                   b2755b1eac52d       coredns-5dd5756b68-9vlnk
	9c102f868585b       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago        Exited              kindnet-cni               0                   8c116aa0b59e7       kindnet-4lwzg
	43cac9d56e995       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago        Exited              kube-proxy                0                   2af054c67bb56       kube-proxy-vlzf2
	132b3767fdc0f       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago        Exited              kube-scheduler            0                   c239bd48e2013       kube-scheduler-multinode-507871
	97f09dd3764f0       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago        Exited              kube-apiserver            0                   c2de6818ded13       kube-apiserver-multinode-507871
	6318489143ee0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago        Exited              etcd                      0                   8548b21c08449       etcd-multinode-507871
	ababf7bade675       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago        Exited              kube-controller-manager   0                   181b46e7a6ab1       kube-controller-manager-multinode-507871
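	
	Editor's note: the table above is the CRI view of the node's containers; the Running rows with ATTEMPT 1 are the instances recreated after the restart, and the Exited rows with ATTEMPT 0 are their pre-restart counterparts. A minimal sketch for reproducing this listing on the node, assuming the minikube binary is on PATH and the profile name matches the cluster shown:
	
	  # List all CRI containers (running and exited) inside the multinode-507871 guest.
	  minikube -p multinode-507871 ssh "sudo crictl ps -a"
	  # Narrow to one workload's containers by name, e.g. coredns.
	  minikube -p multinode-507871 ssh "sudo crictl ps -a --name coredns"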
	
	
	==> coredns [1d9b9ad83c74dbb1c87f440444c53ca9bc54169233d2a5506e8cce0cc78dd1f0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42876 - 49648 "HINFO IN 3006261507411900213.5578216792479905689. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015401558s
	
	
	==> coredns [d7ab912de31f7f01362e446bece9ec16e82499aa78c614dbdc345030a7a1a20d] <==
	[INFO] 10.244.0.3:37575 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002024866s
	[INFO] 10.244.0.3:36577 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082358s
	[INFO] 10.244.0.3:47842 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084657s
	[INFO] 10.244.0.3:51068 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001376723s
	[INFO] 10.244.0.3:56425 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000043467s
	[INFO] 10.244.0.3:37459 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084063s
	[INFO] 10.244.0.3:47995 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041054s
	[INFO] 10.244.1.2:45681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140092s
	[INFO] 10.244.1.2:45576 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113922s
	[INFO] 10.244.1.2:46653 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009706s
	[INFO] 10.244.1.2:59115 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074005s
	[INFO] 10.244.0.3:41435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127009s
	[INFO] 10.244.0.3:38580 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090375s
	[INFO] 10.244.0.3:46807 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012191s
	[INFO] 10.244.0.3:45912 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005798s
	[INFO] 10.244.1.2:48361 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138669s
	[INFO] 10.244.1.2:49087 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014347s
	[INFO] 10.244.1.2:49674 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000219815s
	[INFO] 10.244.1.2:50059 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000213717s
	[INFO] 10.244.0.3:46614 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083328s
	[INFO] 10.244.0.3:52240 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000048161s
	[INFO] 10.244.0.3:52273 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000049137s
	[INFO] 10.244.0.3:45527 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000042531s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
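	
	Editor's note: the block above is from the coredns container that exited during the restart; the SIGTERM and lameduck lines mark its shutdown. A sketch for pulling both the current and the previous instance's logs with kubectl, assuming the kubeconfig context is named after the profile (minikube's default):
	
	  # Logs of the currently running coredns container.
	  kubectl --context multinode-507871 -n kube-system logs coredns-5dd5756b68-9vlnk
	  # Logs of the previous (exited) instance of the same container.
	  kubectl --context multinode-507871 -n kube-system logs coredns-5dd5756b68-9vlnk --previous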
	
	
	==> describe nodes <==
	Name:               multinode-507871
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-507871
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=multinode-507871
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T00_16_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:16:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-507871
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:24:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 00:22:56 +0000   Thu, 14 Mar 2024 00:16:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 00:22:56 +0000   Thu, 14 Mar 2024 00:16:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 00:22:56 +0000   Thu, 14 Mar 2024 00:16:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 00:22:56 +0000   Thu, 14 Mar 2024 00:16:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    multinode-507871
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 8dcd38c23ae04b89b9efc07e56cd47fa
	  System UUID:                8dcd38c2-3ae0-4b89-b9ef-c07e56cd47fa
	  Boot ID:                    0ae90c06-f75b-4c13-8ad2-654634eab994
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-vrskm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 coredns-5dd5756b68-9vlnk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m30s
	  kube-system                 etcd-multinode-507871                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m42s
	  kube-system                 kindnet-4lwzg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m31s
	  kube-system                 kube-apiserver-multinode-507871             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 kube-controller-manager-multinode-507871    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 kube-proxy-vlzf2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-scheduler-multinode-507871             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m28s                  kube-proxy       
	  Normal  Starting                 81s                    kube-proxy       
	  Normal  Starting                 7m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m50s (x4 over 7m50s)  kubelet          Node multinode-507871 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m50s (x4 over 7m50s)  kubelet          Node multinode-507871 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m50s (x3 over 7m50s)  kubelet          Node multinode-507871 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m43s                  kubelet          Node multinode-507871 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m43s                  kubelet          Node multinode-507871 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m43s                  kubelet          Node multinode-507871 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m43s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m31s                  node-controller  Node multinode-507871 event: Registered Node multinode-507871 in Controller
	  Normal  NodeReady                7m25s                  kubelet          Node multinode-507871 status is now: NodeReady
	  Normal  Starting                 88s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  88s (x8 over 88s)      kubelet          Node multinode-507871 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x8 over 88s)      kubelet          Node multinode-507871 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x7 over 88s)      kubelet          Node multinode-507871 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           71s                    node-controller  Node multinode-507871 event: Registered Node multinode-507871 in Controller
	
	
	Name:               multinode-507871-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-507871-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=multinode-507871
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T00_23_40_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:23:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-507871-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:24:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 00:24:10 +0000   Thu, 14 Mar 2024 00:23:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 00:24:10 +0000   Thu, 14 Mar 2024 00:23:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 00:24:10 +0000   Thu, 14 Mar 2024 00:23:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 00:24:10 +0000   Thu, 14 Mar 2024 00:23:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    multinode-507871-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 9694808feb3441d0b5592744303f7626
	  System UUID:                9694808f-eb34-41d0-b559-2744303f7626
	  Boot ID:                    ea3d3811-6406-4c53-aa7b-d3b5cee45955
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-6624j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kindnet-jzhqr               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m51s
	  kube-system                 kube-proxy-lpvtz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m46s                  kube-proxy       
	  Normal  Starting                 37s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m51s (x5 over 6m53s)  kubelet          Node multinode-507871-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m51s (x5 over 6m53s)  kubelet          Node multinode-507871-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m51s (x5 over 6m53s)  kubelet          Node multinode-507871-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m42s                  kubelet          Node multinode-507871-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  41s (x5 over 42s)      kubelet          Node multinode-507871-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x5 over 42s)      kubelet          Node multinode-507871-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x5 over 42s)      kubelet          Node multinode-507871-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                    node-controller  Node multinode-507871-m02 event: Registered Node multinode-507871-m02 in Controller
	  Normal  NodeReady                33s                    kubelet          Node multinode-507871-m02 status is now: NodeReady
	
	
	Name:               multinode-507871-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-507871-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=multinode-507871
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T00_24_09_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:24:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-507871-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:24:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 00:24:17 +0000   Thu, 14 Mar 2024 00:24:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 00:24:17 +0000   Thu, 14 Mar 2024 00:24:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 00:24:17 +0000   Thu, 14 Mar 2024 00:24:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 00:24:17 +0000   Thu, 14 Mar 2024 00:24:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    multinode-507871-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 6e24ff7c5d5246249549d6638f0890c6
	  System UUID:                6e24ff7c-5d52-4624-9549-d6638f0890c6
	  Boot ID:                    57fc1cff-8f37-4076-b13e-d1d92747f5b6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ffqpb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m4s
	  kube-system                 kube-proxy-gxf88    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m13s                  kube-proxy       
	  Normal  Starting                 5m57s                  kube-proxy       
	  Normal  Starting                 7s                     kube-proxy       
	  Normal  NodeHasSufficientMemory  6m4s (x5 over 6m6s)    kubelet          Node multinode-507871-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s (x5 over 6m6s)    kubelet          Node multinode-507871-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s (x5 over 6m6s)    kubelet          Node multinode-507871-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m53s                  kubelet          Node multinode-507871-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m17s (x5 over 5m18s)  kubelet          Node multinode-507871-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m17s (x5 over 5m18s)  kubelet          Node multinode-507871-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m17s (x5 over 5m18s)  kubelet          Node multinode-507871-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m9s                   kubelet          Node multinode-507871-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  11s (x5 over 13s)      kubelet          Node multinode-507871-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x5 over 13s)      kubelet          Node multinode-507871-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x5 over 13s)      kubelet          Node multinode-507871-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6s                     node-controller  Node multinode-507871-m03 event: Registered Node multinode-507871-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-507871-m03 status is now: NodeReady
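	
	Editor's note: all three node descriptions above report Ready with recent heartbeats after the restart. A sketch for regenerating this view against the live cluster, assuming the kubeconfig context carries the profile name:
	
	  # Quick overview of node state, addresses and versions.
	  kubectl --context multinode-507871 get nodes -o wide
	  # Full description (conditions, capacity, events) as captured above.
	  kubectl --context multinode-507871 describe nodes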
	
	
	==> dmesg <==
	[ +11.519653] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.139947] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.198635] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.117799] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.239925] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.811738] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +0.063301] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.711288] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +1.292204] kauditd_printk_skb: 92 callbacks suppressed
	[  +5.982925] systemd-fstab-generator[1272]: Ignoring "noauto" option for root device
	[ +12.781995] systemd-fstab-generator[1456]: Ignoring "noauto" option for root device
	[  +0.097051] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.039279] kauditd_printk_skb: 56 callbacks suppressed
	[Mar14 00:17] kauditd_printk_skb: 16 callbacks suppressed
	[Mar14 00:22] systemd-fstab-generator[2765]: Ignoring "noauto" option for root device
	[  +0.151000] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.197757] systemd-fstab-generator[2792]: Ignoring "noauto" option for root device
	[  +0.151293] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.250614] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[  +0.772094] systemd-fstab-generator[2930]: Ignoring "noauto" option for root device
	[  +2.661608] systemd-fstab-generator[3401]: Ignoring "noauto" option for root device
	[  +1.079584] kauditd_printk_skb: 194 callbacks suppressed
	[  +5.307776] kauditd_printk_skb: 20 callbacks suppressed
	[Mar14 00:23] systemd-fstab-generator[3888]: Ignoring "noauto" option for root device
	[ +11.474700] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [196ecd411466bf39c3aa3952e43cd1133499fe59029dc94d2cfa4061a1e763ed] <==
	{"level":"info","ts":"2024-03-14T00:22:53.91347Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:22:53.913498Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:22:53.914032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a switched to configuration voters=(1901133809061542250)"}
	{"level":"info","ts":"2024-03-14T00:22:53.918675Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"94dd135126e1e7b0","local-member-id":"1a622f206f99396a","added-peer-id":"1a622f206f99396a","added-peer-peer-urls":["https://192.168.39.60:2380"]}
	{"level":"info","ts":"2024-03-14T00:22:53.91905Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"94dd135126e1e7b0","local-member-id":"1a622f206f99396a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:22:53.919109Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:22:53.950863Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T00:22:53.951002Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-03-14T00:22:53.951151Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-03-14T00:22:53.952291Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-14T00:22:53.952225Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"1a622f206f99396a","initial-advertise-peer-urls":["https://192.168.39.60:2380"],"listen-peer-urls":["https://192.168.39.60:2380"],"advertise-client-urls":["https://192.168.39.60:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.60:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T00:22:55.24465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-14T00:22:55.244766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-14T00:22:55.244804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a received MsgPreVoteResp from 1a622f206f99396a at term 2"}
	{"level":"info","ts":"2024-03-14T00:22:55.244835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became candidate at term 3"}
	{"level":"info","ts":"2024-03-14T00:22:55.244871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a received MsgVoteResp from 1a622f206f99396a at term 3"}
	{"level":"info","ts":"2024-03-14T00:22:55.244898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became leader at term 3"}
	{"level":"info","ts":"2024-03-14T00:22:55.244924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1a622f206f99396a elected leader 1a622f206f99396a at term 3"}
	{"level":"info","ts":"2024-03-14T00:22:55.250482Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1a622f206f99396a","local-member-attributes":"{Name:multinode-507871 ClientURLs:[https://192.168.39.60:2379]}","request-path":"/0/members/1a622f206f99396a/attributes","cluster-id":"94dd135126e1e7b0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T00:22:55.250883Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:22:55.250899Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T00:22:55.251015Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T00:22:55.251116Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:22:55.252366Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T00:22:55.252485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.60:2379"}
	
	
	==> etcd [6318489143ee06c76a22aae895ed745c0918737b90a5e101cef6cee51b2c2e54] <==
	WARNING: 2024/03/14 00:16:37 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-03-14T00:18:18.830082Z","caller":"traceutil/trace.go:171","msg":"trace[968168768] linearizableReadLoop","detail":"{readStateIndex:634; appliedIndex:633; }","duration":"130.171957ms","start":"2024-03-14T00:18:18.699876Z","end":"2024-03-14T00:18:18.830048Z","steps":["trace[968168768] 'read index received'  (duration: 130.030273ms)","trace[968168768] 'applied index is now lower than readState.Index'  (duration: 141.044µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T00:18:18.830638Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.645101ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T00:18:18.830713Z","caller":"traceutil/trace.go:171","msg":"trace[33776748] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:603; }","duration":"130.876422ms","start":"2024-03-14T00:18:18.69983Z","end":"2024-03-14T00:18:18.830706Z","steps":["trace[33776748] 'agreement among raft nodes before linearized reading'  (duration: 130.623497ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T00:18:18.83042Z","caller":"traceutil/trace.go:171","msg":"trace[826385628] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"155.236177ms","start":"2024-03-14T00:18:18.675165Z","end":"2024-03-14T00:18:18.830402Z","steps":["trace[826385628] 'process raft request'  (duration: 154.732215ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:18:19.062011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.073154ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-gxf88\" ","response":"range_response_count:1 size:3440"}
	{"level":"info","ts":"2024-03-14T00:18:19.062212Z","caller":"traceutil/trace.go:171","msg":"trace[969139056] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-gxf88; range_end:; response_count:1; response_revision:603; }","duration":"223.304991ms","start":"2024-03-14T00:18:18.838886Z","end":"2024-03-14T00:18:19.062191Z","steps":["trace[969139056] 'range keys from in-memory index tree'  (duration: 222.827636ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:18:19.06205Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.298451ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T00:18:19.062505Z","caller":"traceutil/trace.go:171","msg":"trace[744042170] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:603; }","duration":"221.759265ms","start":"2024-03-14T00:18:18.840732Z","end":"2024-03-14T00:18:19.062492Z","steps":["trace[744042170] 'range keys from in-memory index tree'  (duration: 221.275267ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T00:18:19.288467Z","caller":"traceutil/trace.go:171","msg":"trace[778292276] linearizableReadLoop","detail":"{readStateIndex:635; appliedIndex:634; }","duration":"212.929453ms","start":"2024-03-14T00:18:19.075519Z","end":"2024-03-14T00:18:19.288449Z","steps":["trace[778292276] 'read index received'  (duration: 212.718203ms)","trace[778292276] 'applied index is now lower than readState.Index'  (duration: 210.531µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T00:18:19.288756Z","caller":"traceutil/trace.go:171","msg":"trace[1946803402] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"215.921159ms","start":"2024-03-14T00:18:19.07282Z","end":"2024-03-14T00:18:19.288741Z","steps":["trace[1946803402] 'process raft request'  (duration: 215.475482ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:18:19.289062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.534264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T00:18:19.289146Z","caller":"traceutil/trace.go:171","msg":"trace[1611085147] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:604; }","duration":"213.634544ms","start":"2024-03-14T00:18:19.075495Z","end":"2024-03-14T00:18:19.289129Z","steps":["trace[1611085147] 'agreement among raft nodes before linearized reading'  (duration: 213.416044ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:18:19.553441Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.340397ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4137275588816839247 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:602 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T00:18:19.553643Z","caller":"traceutil/trace.go:171","msg":"trace[765541066] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"253.331405ms","start":"2024-03-14T00:18:19.3003Z","end":"2024-03-14T00:18:19.553631Z","steps":["trace[765541066] 'process raft request'  (duration: 83.019346ms)","trace[765541066] 'compare'  (duration: 169.141915ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T00:21:16.688792Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-14T00:21:16.688999Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-507871","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.60:2380"],"advertise-client-urls":["https://192.168.39.60:2379"]}
	{"level":"warn","ts":"2024-03-14T00:21:16.689196Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T00:21:16.68928Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T00:21:16.762207Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.60:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T00:21:16.762309Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.60:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-14T00:21:16.762359Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1a622f206f99396a","current-leader-member-id":"1a622f206f99396a"}
	{"level":"info","ts":"2024-03-14T00:21:16.765223Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-03-14T00:21:16.765467Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-03-14T00:21:16.765634Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-507871","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.60:2380"],"advertise-client-urls":["https://192.168.39.60:2379"]}
	
	
	==> kernel <==
	 00:24:20 up 8 min,  0 users,  load average: 0.27, 0.29, 0.15
	Linux multinode-507871 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9c102f868585b7b3a9473445f469919e0e6ead84e81ad5d8530fabdfbd16969c] <==
	I0314 00:20:35.909696       1 main.go:250] Node multinode-507871-m03 has CIDR [10.244.3.0/24] 
	I0314 00:20:45.916438       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:20:45.916491       1 main.go:227] handling current node
	I0314 00:20:45.916502       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:20:45.916508       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:20:45.916682       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:20:45.916690       1 main.go:250] Node multinode-507871-m03 has CIDR [10.244.3.0/24] 
	I0314 00:20:55.922906       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:20:55.922951       1 main.go:227] handling current node
	I0314 00:20:55.922976       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:20:55.922982       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:20:55.923114       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:20:55.923143       1 main.go:250] Node multinode-507871-m03 has CIDR [10.244.3.0/24] 
	I0314 00:21:05.937775       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:21:05.937908       1 main.go:227] handling current node
	I0314 00:21:05.937938       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:21:05.937965       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:21:05.938101       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:21:05.938123       1 main.go:250] Node multinode-507871-m03 has CIDR [10.244.3.0/24] 
	I0314 00:21:15.951443       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:21:15.951471       1 main.go:227] handling current node
	I0314 00:21:15.951494       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:21:15.951499       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:21:15.951802       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:21:15.951814       1 main.go:250] Node multinode-507871-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e283bf2d8cdb9edb01135144d5c43358db6115da0c75fd160e6555e8f5daf24d] <==
	I0314 00:23:38.634500       1 main.go:227] handling current node
	I0314 00:23:38.634511       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:23:38.634516       1 main.go:250] Node multinode-507871-m03 has CIDR [10.244.3.0/24] 
	I0314 00:23:48.648205       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:23:48.648289       1 main.go:227] handling current node
	I0314 00:23:48.648312       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:23:48.648329       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:23:48.648472       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:23:48.648493       1 main.go:250] Node multinode-507871-m03 has CIDR [10.244.3.0/24] 
	I0314 00:23:58.657770       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:23:58.657908       1 main.go:227] handling current node
	I0314 00:23:58.657931       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:23:58.657949       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:23:58.658087       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:23:58.658107       1 main.go:250] Node multinode-507871-m03 has CIDR [10.244.3.0/24] 
	I0314 00:24:08.670774       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:24:08.670980       1 main.go:227] handling current node
	I0314 00:24:08.671060       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:24:08.671085       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:24:18.675711       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:24:18.675818       1 main.go:227] handling current node
	I0314 00:24:18.675841       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:24:18.675863       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:24:18.675981       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:24:18.676002       1 main.go:250] Node multinode-507871-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [97f09dd3764f0f3583729e4f8d85c01ccfe809a991df11d63266b6baf1449f4d] <==
	E0314 00:21:16.716293       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.716354       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.716413       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.716446       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.716513       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.716711       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.717083       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.717160       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.717218       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.717311       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0314 00:21:16.717476       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 00:21:16.718131       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0314 00:21:16.718905       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0314 00:21:16.719076       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0314 00:21:16.719126       1 controller.go:129] Ending legacy_token_tracking_controller
	I0314 00:21:16.719157       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0314 00:21:16.719192       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0314 00:21:16.719226       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
	I0314 00:21:16.719270       1 available_controller.go:439] Shutting down AvailableConditionController
	I0314 00:21:16.719330       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0314 00:21:16.719625       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0314 00:21:16.719668       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0314 00:21:16.720010       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0314 00:21:16.720081       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0314 00:21:16.720134       1 naming_controller.go:302] Shutting down NamingConditionController
	
	
	==> kube-apiserver [c2a96cc8a747b9e9ba5ceedd453e23e200c916661c3a2d3ce413b5e35bab257a] <==
	I0314 00:22:56.696815       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0314 00:22:56.697080       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0314 00:22:56.697112       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0314 00:22:56.725303       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 00:22:56.751035       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 00:22:56.751517       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 00:22:56.765434       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 00:22:56.751530       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 00:22:56.751737       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 00:22:56.751746       1 shared_informer.go:318] Caches are synced for configmaps
	E0314 00:22:56.776774       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0314 00:22:56.801728       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 00:22:56.801906       1 aggregator.go:166] initial CRD sync complete...
	I0314 00:22:56.801945       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 00:22:56.801952       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 00:22:56.801958       1 cache.go:39] Caches are synced for autoregister controller
	I0314 00:22:56.807290       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 00:22:57.660299       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 00:22:59.152617       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 00:22:59.272718       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 00:22:59.282992       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 00:22:59.364206       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 00:22:59.373798       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 00:23:09.166122       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 00:23:09.174239       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ababf7bade675f2692cbc0b94152582f6dcc6519525a82b73e773ad3b95a3ab6] <==
	I0314 00:17:43.975807       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="130.014µs"
	I0314 00:17:44.274486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.84759ms"
	I0314 00:17:44.274889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="152.138µs"
	I0314 00:18:16.199309       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:18:16.199772       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-507871-m03\" does not exist"
	I0314 00:18:16.211191       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-507871-m03" podCIDRs=["10.244.2.0/24"]
	I0314 00:18:16.228911       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ffqpb"
	I0314 00:18:16.229014       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gxf88"
	I0314 00:18:19.599699       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-507871-m03"
	I0314 00:18:19.599907       1 event.go:307] "Event occurred" object="multinode-507871-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-507871-m03 event: Registered Node multinode-507871-m03 in Controller"
	I0314 00:18:27.151974       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:19:00.843229       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:19:03.537719       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:19:03.538744       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-507871-m03\" does not exist"
	I0314 00:19:03.567862       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-507871-m03" podCIDRs=["10.244.3.0/24"]
	I0314 00:19:11.033268       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:19:54.657828       1 event.go:307] "Event occurred" object="multinode-507871-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-507871-m03 status is now: NodeNotReady"
	I0314 00:19:54.662152       1 event.go:307] "Event occurred" object="multinode-507871-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-507871-m02 status is now: NodeNotReady"
	I0314 00:19:54.673805       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-gxf88" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 00:19:54.684697       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-lpvtz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 00:19:54.693866       1 event.go:307] "Event occurred" object="kube-system/kindnet-ffqpb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 00:19:54.698429       1 event.go:307] "Event occurred" object="kube-system/kindnet-jzhqr" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 00:19:54.719455       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-498th" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 00:19:54.738215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.254429ms"
	I0314 00:19:54.738369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.551µs"
	
	
	==> kube-controller-manager [b50238b896e9f2bcc7319f3dcc94aaaadd48bf6141505c5b97bd081ce899e2ce] <==
	I0314 00:23:34.120993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.401µs"
	I0314 00:23:39.224539       1 event.go:307] "Event occurred" object="multinode-507871-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-507871-m02 event: Removing Node multinode-507871-m02 from Controller"
	I0314 00:23:39.901940       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-507871-m02\" does not exist"
	I0314 00:23:39.904038       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-498th" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-498th"
	I0314 00:23:39.913118       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-507871-m02" podCIDRs=["10.244.1.0/24"]
	I0314 00:23:40.392862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="89.734µs"
	I0314 00:23:40.418780       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="60.846µs"
	I0314 00:23:40.429225       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="69.363µs"
	I0314 00:23:40.463149       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="72.243µs"
	I0314 00:23:40.477275       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="72.824µs"
	I0314 00:23:40.482914       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="52.988µs"
	I0314 00:23:44.225747       1 event.go:307] "Event occurred" object="multinode-507871-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-507871-m02 event: Registered Node multinode-507871-m02 in Controller"
	I0314 00:23:47.034376       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:23:47.055204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.924µs"
	I0314 00:23:47.073463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="105.476µs"
	I0314 00:23:49.239529       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-6624j" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-6624j"
	I0314 00:23:50.910186       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.894089ms"
	I0314 00:23:50.910339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.599µs"
	I0314 00:24:06.811953       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:24:09.243151       1 event.go:307] "Event occurred" object="multinode-507871-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-507871-m03 event: Removing Node multinode-507871-m03 from Controller"
	I0314 00:24:09.374763       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-507871-m03\" does not exist"
	I0314 00:24:09.375961       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:24:09.391087       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-507871-m03" podCIDRs=["10.244.2.0/24"]
	I0314 00:24:14.244613       1 event.go:307] "Event occurred" object="multinode-507871-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-507871-m03 event: Registered Node multinode-507871-m03 in Controller"
	I0314 00:24:17.008537       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	
	
	==> kube-proxy [43cac9d56e9954fe3fecef5d956cd1f5aca82518b1a25227900c06316902b6cd] <==
	I0314 00:16:51.887205       1 server_others.go:69] "Using iptables proxy"
	I0314 00:16:51.904828       1 node.go:141] Successfully retrieved node IP: 192.168.39.60
	I0314 00:16:51.953679       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 00:16:51.953701       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 00:16:51.956098       1 server_others.go:152] "Using iptables Proxier"
	I0314 00:16:51.956409       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 00:16:51.956822       1 server.go:846] "Version info" version="v1.28.4"
	I0314 00:16:51.956869       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:16:51.958871       1 config.go:188] "Starting service config controller"
	I0314 00:16:51.959182       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 00:16:51.959304       1 config.go:97] "Starting endpoint slice config controller"
	I0314 00:16:51.959358       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 00:16:51.962650       1 config.go:315] "Starting node config controller"
	I0314 00:16:51.962692       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 00:16:52.059478       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 00:16:52.059728       1 shared_informer.go:318] Caches are synced for service config
	I0314 00:16:52.063753       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [60b8bdb8695937d178aa2167246ec21bbd86d93e0093bebaa5a1a92d060e682f] <==
	I0314 00:22:58.266816       1 server_others.go:69] "Using iptables proxy"
	I0314 00:22:58.278899       1 node.go:141] Successfully retrieved node IP: 192.168.39.60
	I0314 00:22:58.349690       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 00:22:58.349743       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 00:22:58.354881       1 server_others.go:152] "Using iptables Proxier"
	I0314 00:22:58.354988       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 00:22:58.355201       1 server.go:846] "Version info" version="v1.28.4"
	I0314 00:22:58.355233       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:22:58.357233       1 config.go:188] "Starting service config controller"
	I0314 00:22:58.357277       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 00:22:58.357301       1 config.go:97] "Starting endpoint slice config controller"
	I0314 00:22:58.357305       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 00:22:58.357910       1 config.go:315] "Starting node config controller"
	I0314 00:22:58.357940       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 00:22:58.458089       1 shared_informer.go:318] Caches are synced for node config
	I0314 00:22:58.458149       1 shared_informer.go:318] Caches are synced for service config
	I0314 00:22:58.458173       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [132b3767fdc0f7f1e2bc85f2a43c23e181ae105c607486a4a80938ca6fd6033a] <==
	E0314 00:16:34.340017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 00:16:35.213867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 00:16:35.213918       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 00:16:35.282065       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 00:16:35.282197       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0314 00:16:35.321892       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 00:16:35.322029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 00:16:35.396382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 00:16:35.396820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 00:16:35.580946       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 00:16:35.581114       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 00:16:35.593266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 00:16:35.593311       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 00:16:35.598785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 00:16:35.598943       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 00:16:35.602849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 00:16:35.602951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 00:16:35.640062       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 00:16:35.640111       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 00:16:35.850436       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 00:16:35.850496       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 00:16:38.929268       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 00:21:16.693186       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0314 00:21:16.693295       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0314 00:21:16.711514       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e95d69e88eba0a0dabfa6b362cc39e3aa3b1d859ae4dc2ba0bf2ed712cf64daf] <==
	I0314 00:22:53.982622       1 serving.go:348] Generated self-signed cert in-memory
	W0314 00:22:56.708065       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 00:22:56.709502       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 00:22:56.709707       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 00:22:56.709743       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 00:22:56.740101       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 00:22:56.740152       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:22:56.741488       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 00:22:56.741644       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 00:22:56.742100       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 00:22:56.742233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 00:22:56.845291       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 00:22:57 multinode-507871 kubelet[3408]: I0314 00:22:57.504174    3408 topology_manager.go:215] "Topology Admit Handler" podUID="53e3b884-181c-4bbd-a913-dc0e653a6049" podNamespace="kube-system" podName="coredns-5dd5756b68-9vlnk"
	Mar 14 00:22:57 multinode-507871 kubelet[3408]: I0314 00:22:57.504287    3408 topology_manager.go:215] "Topology Admit Handler" podUID="8de44d7b-d708-4151-a9d2-331fe7733508" podNamespace="kube-system" podName="storage-provisioner"
	Mar 14 00:22:57 multinode-507871 kubelet[3408]: I0314 00:22:57.504354    3408 topology_manager.go:215] "Topology Admit Handler" podUID="fa6241da-7a44-4dd4-b00a-b3a008151fb5" podNamespace="default" podName="busybox-5b5d89c9d6-vrskm"
	Mar 14 00:22:57 multinode-507871 kubelet[3408]: I0314 00:22:57.510890    3408 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 14 00:22:57 multinode-507871 kubelet[3408]: I0314 00:22:57.511607    3408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96e17dd5-4e30-48aa-8f37-e42db89652da-xtables-lock\") pod \"kube-proxy-vlzf2\" (UID: \"96e17dd5-4e30-48aa-8f37-e42db89652da\") " pod="kube-system/kube-proxy-vlzf2"
	Mar 14 00:22:57 multinode-507871 kubelet[3408]: I0314 00:22:57.511719    3408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d2baacd-f40a-400c-b587-a4be4745ee78-xtables-lock\") pod \"kindnet-4lwzg\" (UID: \"6d2baacd-f40a-400c-b587-a4be4745ee78\") " pod="kube-system/kindnet-4lwzg"
	Mar 14 00:22:57 multinode-507871 kubelet[3408]: I0314 00:22:57.511818    3408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8de44d7b-d708-4151-a9d2-331fe7733508-tmp\") pod \"storage-provisioner\" (UID: \"8de44d7b-d708-4151-a9d2-331fe7733508\") " pod="kube-system/storage-provisioner"
	Mar 14 00:22:57 multinode-507871 kubelet[3408]: I0314 00:22:57.511884    3408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6d2baacd-f40a-400c-b587-a4be4745ee78-cni-cfg\") pod \"kindnet-4lwzg\" (UID: \"6d2baacd-f40a-400c-b587-a4be4745ee78\") " pod="kube-system/kindnet-4lwzg"
	Mar 14 00:22:57 multinode-507871 kubelet[3408]: I0314 00:22:57.511969    3408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96e17dd5-4e30-48aa-8f37-e42db89652da-lib-modules\") pod \"kube-proxy-vlzf2\" (UID: \"96e17dd5-4e30-48aa-8f37-e42db89652da\") " pod="kube-system/kube-proxy-vlzf2"
	Mar 14 00:22:57 multinode-507871 kubelet[3408]: I0314 00:22:57.512030    3408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d2baacd-f40a-400c-b587-a4be4745ee78-lib-modules\") pod \"kindnet-4lwzg\" (UID: \"6d2baacd-f40a-400c-b587-a4be4745ee78\") " pod="kube-system/kindnet-4lwzg"
	Mar 14 00:23:01 multinode-507871 kubelet[3408]: I0314 00:23:01.692776    3408 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 14 00:23:52 multinode-507871 kubelet[3408]: E0314 00:23:52.586635    3408 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 00:23:52 multinode-507871 kubelet[3408]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 00:23:52 multinode-507871 kubelet[3408]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 00:23:52 multinode-507871 kubelet[3408]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 00:23:52 multinode-507871 kubelet[3408]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 00:23:52 multinode-507871 kubelet[3408]: E0314 00:23:52.646941    3408 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podfa6241da-7a44-4dd4-b00a-b3a008151fb5/crio-9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0: Error finding container 9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0: Status 404 returned error can't find the container with id 9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0
	Mar 14 00:23:52 multinode-507871 kubelet[3408]: E0314 00:23:52.647417    3408 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod8de44d7b-d708-4151-a9d2-331fe7733508/crio-a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13: Error finding container a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13: Status 404 returned error can't find the container with id a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13
	Mar 14 00:23:52 multinode-507871 kubelet[3408]: E0314 00:23:52.647802    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod6f25c10b051e82f6e13ba3c3d00847e1/crio-c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5: Error finding container c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5: Status 404 returned error can't find the container with id c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5
	Mar 14 00:23:52 multinode-507871 kubelet[3408]: E0314 00:23:52.648156    3408 manager.go:1106] Failed to create existing container: /kubepods/pod6d2baacd-f40a-400c-b587-a4be4745ee78/crio-8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77: Error finding container 8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77: Status 404 returned error can't find the container with id 8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77
	Mar 14 00:23:52 multinode-507871 kubelet[3408]: E0314 00:23:52.648464    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/poda57cfd10e401230b197eb5cbd3693e85/crio-8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1: Error finding container 8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1: Status 404 returned error can't find the container with id 8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1
	Mar 14 00:23:52 multinode-507871 kubelet[3408]: E0314 00:23:52.648787    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/podd3269078ba2f0710950742881b1ad45f/crio-c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99: Error finding container c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99: Status 404 returned error can't find the container with id c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99
	Mar 14 00:23:52 multinode-507871 kubelet[3408]: E0314 00:23:52.649089    3408 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod96e17dd5-4e30-48aa-8f37-e42db89652da/crio-2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226: Error finding container 2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226: Status 404 returned error can't find the container with id 2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226
	Mar 14 00:23:52 multinode-507871 kubelet[3408]: E0314 00:23:52.649378    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/poda6acb99e901a4d3e69f051bbe79cf00c/crio-181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f: Error finding container 181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f: Status 404 returned error can't find the container with id 181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f
	Mar 14 00:23:52 multinode-507871 kubelet[3408]: E0314 00:23:52.649663    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod53e3b884-181c-4bbd-a913-dc0e653a6049/crio-b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab: Error finding container b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab: Status 404 returned error can't find the container with id b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:24:19.348415   39901 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18375-4912/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-507871 -n multinode-507871
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-507871 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (307.79s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 stop
E0314 00:24:44.448680   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-507871 stop: exit status 82 (2m0.494279098s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-507871-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-507871 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-507871 status: exit status 3 (18.828381231s)

-- stdout --
	multinode-507871
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-507871-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	E0314 00:26:43.095086   40442 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host
	E0314 00:26:43.095119   40442 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host

** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-507871 status" : exit status 3
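The two failures above are the stop timeout (exit status 82, GUEST_STOP_TIMEOUT while "multinode-507871-m02" stays Running) followed by the status probe failing with exit status 3 because SSH to 192.168.39.70 (node m02) reports no route to host. As a minimal sketch for reproducing this by hand, not part of the recorded test run, the same profile can be exercised with the binary path, profile name, and flags that already appear in this report:
	out/minikube-linux-amd64 -p multinode-507871 stop --alsologtostderr -v=7
	out/minikube-linux-amd64 -p multinode-507871 status --alsologtostderr -v=7
	out/minikube-linux-amd64 -p multinode-507871 logs --file=logs.txt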
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-507871 -n multinode-507871
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-507871 logs -n 25: (1.525816634s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-507871 cp multinode-507871-m02:/home/docker/cp-test.txt                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871:/home/docker/cp-test_multinode-507871-m02_multinode-507871.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n multinode-507871 sudo cat                                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | /home/docker/cp-test_multinode-507871-m02_multinode-507871.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-507871 cp multinode-507871-m02:/home/docker/cp-test.txt                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m03:/home/docker/cp-test_multinode-507871-m02_multinode-507871-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n multinode-507871-m03 sudo cat                                   | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | /home/docker/cp-test_multinode-507871-m02_multinode-507871-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-507871 cp testdata/cp-test.txt                                                | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-507871 cp multinode-507871-m03:/home/docker/cp-test.txt                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3007186328/001/cp-test_multinode-507871-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-507871 cp multinode-507871-m03:/home/docker/cp-test.txt                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871:/home/docker/cp-test_multinode-507871-m03_multinode-507871.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n multinode-507871 sudo cat                                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | /home/docker/cp-test_multinode-507871-m03_multinode-507871.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-507871 cp multinode-507871-m03:/home/docker/cp-test.txt                       | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m02:/home/docker/cp-test_multinode-507871-m03_multinode-507871-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n                                                                 | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | multinode-507871-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-507871 ssh -n multinode-507871-m02 sudo cat                                   | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	|         | /home/docker/cp-test_multinode-507871-m03_multinode-507871-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-507871 node stop m03                                                          | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:18 UTC |
	| node    | multinode-507871 node start                                                             | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:18 UTC | 14 Mar 24 00:19 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-507871                                                                | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:19 UTC |                     |
	| stop    | -p multinode-507871                                                                     | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:19 UTC |                     |
	| start   | -p multinode-507871                                                                     | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:21 UTC | 14 Mar 24 00:24 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-507871                                                                | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:24 UTC |                     |
	| node    | multinode-507871 node delete                                                            | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:24 UTC | 14 Mar 24 00:24 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-507871 stop                                                                   | multinode-507871 | jenkins | v1.32.0 | 14 Mar 24 00:24 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:21:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:21:15.665094   39054 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:21:15.665226   39054 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:21:15.665237   39054 out.go:304] Setting ErrFile to fd 2...
	I0314 00:21:15.665244   39054 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:21:15.665430   39054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:21:15.665984   39054 out.go:298] Setting JSON to false
	I0314 00:21:15.666898   39054 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3819,"bootTime":1710371857,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:21:15.666955   39054 start.go:139] virtualization: kvm guest
	I0314 00:21:15.669104   39054 out.go:177] * [multinode-507871] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:21:15.670803   39054 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:21:15.670850   39054 notify.go:220] Checking for updates...
	I0314 00:21:15.672145   39054 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:21:15.673749   39054 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:21:15.674963   39054 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:21:15.676209   39054 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:21:15.677459   39054 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:21:15.679164   39054 config.go:182] Loaded profile config "multinode-507871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:21:15.679268   39054 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:21:15.679669   39054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:21:15.679719   39054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:21:15.694867   39054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I0314 00:21:15.695295   39054 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:21:15.695809   39054 main.go:141] libmachine: Using API Version  1
	I0314 00:21:15.695825   39054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:21:15.696160   39054 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:21:15.696431   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:21:15.732193   39054 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 00:21:15.733563   39054 start.go:297] selected driver: kvm2
	I0314 00:21:15.733578   39054 start.go:901] validating driver "kvm2" against &{Name:multinode-507871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-507871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:21:15.733728   39054 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:21:15.734047   39054 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:21:15.734110   39054 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:21:15.749198   39054 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:21:15.750219   39054 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 00:21:15.750341   39054 cni.go:84] Creating CNI manager for ""
	I0314 00:21:15.750367   39054 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0314 00:21:15.750458   39054 start.go:340] cluster config:
	{Name:multinode-507871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-507871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:21:15.750701   39054 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:21:15.752608   39054 out.go:177] * Starting "multinode-507871" primary control-plane node in "multinode-507871" cluster
	I0314 00:21:15.754181   39054 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:21:15.754218   39054 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 00:21:15.754225   39054 cache.go:56] Caching tarball of preloaded images
	I0314 00:21:15.754341   39054 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 00:21:15.754361   39054 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 00:21:15.754481   39054 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/config.json ...
	I0314 00:21:15.754679   39054 start.go:360] acquireMachinesLock for multinode-507871: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:21:15.754718   39054 start.go:364] duration metric: took 21.911µs to acquireMachinesLock for "multinode-507871"
	I0314 00:21:15.754735   39054 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:21:15.754742   39054 fix.go:54] fixHost starting: 
	I0314 00:21:15.755030   39054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:21:15.755061   39054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:21:15.769002   39054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39569
	I0314 00:21:15.769472   39054 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:21:15.770004   39054 main.go:141] libmachine: Using API Version  1
	I0314 00:21:15.770030   39054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:21:15.770382   39054 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:21:15.770597   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:21:15.770790   39054 main.go:141] libmachine: (multinode-507871) Calling .GetState
	I0314 00:21:15.772509   39054 fix.go:112] recreateIfNeeded on multinode-507871: state=Running err=<nil>
	W0314 00:21:15.772525   39054 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:21:15.774755   39054 out.go:177] * Updating the running kvm2 "multinode-507871" VM ...
	I0314 00:21:15.776347   39054 machine.go:94] provisionDockerMachine start ...
	I0314 00:21:15.776371   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:21:15.776642   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:21:15.779523   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:15.779991   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:21:15.780018   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:15.780148   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:21:15.780344   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:15.780494   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:15.780692   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:21:15.780853   39054 main.go:141] libmachine: Using SSH client type: native
	I0314 00:21:15.781039   39054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0314 00:21:15.781050   39054 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:21:15.900769   39054 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-507871
	
	I0314 00:21:15.900805   39054 main.go:141] libmachine: (multinode-507871) Calling .GetMachineName
	I0314 00:21:15.901035   39054 buildroot.go:166] provisioning hostname "multinode-507871"
	I0314 00:21:15.901060   39054 main.go:141] libmachine: (multinode-507871) Calling .GetMachineName
	I0314 00:21:15.901297   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:21:15.904312   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:15.904745   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:21:15.904776   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:15.905121   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:21:15.905324   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:15.905488   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:15.905708   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:21:15.906009   39054 main.go:141] libmachine: Using SSH client type: native
	I0314 00:21:15.906192   39054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0314 00:21:15.906208   39054 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-507871 && echo "multinode-507871" | sudo tee /etc/hostname
	I0314 00:21:16.041344   39054 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-507871
	
	I0314 00:21:16.041383   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:21:16.044500   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.045011   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:21:16.045032   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.045225   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:21:16.045412   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:16.045647   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:16.045795   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:21:16.046004   39054 main.go:141] libmachine: Using SSH client type: native
	I0314 00:21:16.046237   39054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0314 00:21:16.046263   39054 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-507871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-507871/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-507871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:21:16.176217   39054 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:21:16.176249   39054 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:21:16.176271   39054 buildroot.go:174] setting up certificates
	I0314 00:21:16.176284   39054 provision.go:84] configureAuth start
	I0314 00:21:16.176364   39054 main.go:141] libmachine: (multinode-507871) Calling .GetMachineName
	I0314 00:21:16.176669   39054 main.go:141] libmachine: (multinode-507871) Calling .GetIP
	I0314 00:21:16.179919   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.180384   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:21:16.180425   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.180636   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:21:16.182725   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.183088   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:21:16.183111   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.183274   39054 provision.go:143] copyHostCerts
	I0314 00:21:16.183299   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:21:16.183334   39054 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:21:16.183343   39054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:21:16.183409   39054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:21:16.183494   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:21:16.183511   39054 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:21:16.183518   39054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:21:16.183541   39054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:21:16.183594   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:21:16.183614   39054 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:21:16.183621   39054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:21:16.183640   39054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:21:16.183703   39054 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.multinode-507871 san=[127.0.0.1 192.168.39.60 localhost minikube multinode-507871]
	I0314 00:21:16.376767   39054 provision.go:177] copyRemoteCerts
	I0314 00:21:16.376835   39054 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:21:16.376855   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:21:16.379603   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.380024   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:21:16.380054   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.380195   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:21:16.380350   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:16.380486   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:21:16.380612   39054 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/multinode-507871/id_rsa Username:docker}
	I0314 00:21:16.470272   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0314 00:21:16.470350   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:21:16.497147   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0314 00:21:16.497235   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0314 00:21:16.531695   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0314 00:21:16.531774   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:21:16.561281   39054 provision.go:87] duration metric: took 384.986211ms to configureAuth
	I0314 00:21:16.561308   39054 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:21:16.561511   39054 config.go:182] Loaded profile config "multinode-507871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:21:16.561583   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:21:16.564259   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.564736   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:21:16.564764   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:21:16.564967   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:21:16.565147   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:16.565275   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:21:16.565435   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:21:16.565569   39054 main.go:141] libmachine: Using SSH client type: native
	I0314 00:21:16.565766   39054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0314 00:21:16.565783   39054 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:22:47.424781   39054 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:22:47.424810   39054 machine.go:97] duration metric: took 1m31.64844843s to provisionDockerMachine
	I0314 00:22:47.424826   39054 start.go:293] postStartSetup for "multinode-507871" (driver="kvm2")
	I0314 00:22:47.424855   39054 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:22:47.424882   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:22:47.425221   39054 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:22:47.425264   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:22:47.428387   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.428781   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:22:47.428806   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.428953   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:22:47.429135   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:22:47.429324   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:22:47.429466   39054 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/multinode-507871/id_rsa Username:docker}
	I0314 00:22:47.519091   39054 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:22:47.524059   39054 command_runner.go:130] > NAME=Buildroot
	I0314 00:22:47.524091   39054 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0314 00:22:47.524134   39054 command_runner.go:130] > ID=buildroot
	I0314 00:22:47.524143   39054 command_runner.go:130] > VERSION_ID=2023.02.9
	I0314 00:22:47.524151   39054 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0314 00:22:47.524211   39054 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:22:47.524234   39054 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:22:47.524316   39054 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:22:47.524408   39054 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:22:47.524418   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /etc/ssl/certs/122682.pem
	I0314 00:22:47.524545   39054 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:22:47.535141   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:22:47.563284   39054 start.go:296] duration metric: took 138.442833ms for postStartSetup
	I0314 00:22:47.563354   39054 fix.go:56] duration metric: took 1m31.808587962s for fixHost
	I0314 00:22:47.563377   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:22:47.566331   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.566821   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:22:47.566846   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.567011   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:22:47.567224   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:22:47.567390   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:22:47.567558   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:22:47.567738   39054 main.go:141] libmachine: Using SSH client type: native
	I0314 00:22:47.567956   39054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0314 00:22:47.567970   39054 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:22:47.680055   39054 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710375767.648719361
	
	I0314 00:22:47.680077   39054 fix.go:216] guest clock: 1710375767.648719361
	I0314 00:22:47.680083   39054 fix.go:229] Guest: 2024-03-14 00:22:47.648719361 +0000 UTC Remote: 2024-03-14 00:22:47.563360019 +0000 UTC m=+91.948892899 (delta=85.359342ms)
	I0314 00:22:47.680128   39054 fix.go:200] guest clock delta is within tolerance: 85.359342ms
	I0314 00:22:47.680134   39054 start.go:83] releasing machines lock for "multinode-507871", held for 1m31.925406939s
	I0314 00:22:47.680158   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:22:47.680415   39054 main.go:141] libmachine: (multinode-507871) Calling .GetIP
	I0314 00:22:47.683326   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.683802   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:22:47.683834   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.684001   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:22:47.684581   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:22:47.684737   39054 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:22:47.684815   39054 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:22:47.684876   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:22:47.684979   39054 ssh_runner.go:195] Run: cat /version.json
	I0314 00:22:47.684997   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:22:47.687728   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.688014   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.688188   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:22:47.688215   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.688351   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:22:47.688368   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:22:47.688374   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:47.688554   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:22:47.688570   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:22:47.688734   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:22:47.688743   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:22:47.688927   39054 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:22:47.688969   39054 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/multinode-507871/id_rsa Username:docker}
	I0314 00:22:47.689048   39054 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/multinode-507871/id_rsa Username:docker}
	I0314 00:22:47.768061   39054 command_runner.go:130] > {"iso_version": "v1.32.1-1710348681-18375", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "fd5757a6603390a2c0efe3b1e5cdd797538203fd"}
	I0314 00:22:47.768240   39054 ssh_runner.go:195] Run: systemctl --version
	I0314 00:22:47.805263   39054 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0314 00:22:47.806054   39054 command_runner.go:130] > systemd 252 (252)
	I0314 00:22:47.806088   39054 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0314 00:22:47.806150   39054 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:22:47.973520   39054 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 00:22:47.980948   39054 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0314 00:22:47.981271   39054 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:22:47.981346   39054 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:22:47.990878   39054 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0314 00:22:47.990900   39054 start.go:494] detecting cgroup driver to use...
	I0314 00:22:47.990960   39054 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:22:48.007037   39054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:22:48.021313   39054 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:22:48.021374   39054 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:22:48.035201   39054 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:22:48.048907   39054 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:22:48.190809   39054 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:22:48.355710   39054 docker.go:233] disabling docker service ...
	I0314 00:22:48.355784   39054 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:22:48.377966   39054 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:22:48.393677   39054 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:22:48.542144   39054 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:22:48.688125   39054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:22:48.703800   39054 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:22:48.723432   39054 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0314 00:22:48.723820   39054 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:22:48.723872   39054 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:22:48.737298   39054 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:22:48.737376   39054 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:22:48.748681   39054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:22:48.759991   39054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:22:48.770814   39054 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:22:48.782152   39054 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:22:48.791622   39054 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0314 00:22:48.791884   39054 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:22:48.801770   39054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:22:48.948433   39054 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:22:49.216474   39054 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:22:49.216543   39054 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:22:49.221707   39054 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0314 00:22:49.221728   39054 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0314 00:22:49.221737   39054 command_runner.go:130] > Device: 0,22	Inode: 1326        Links: 1
	I0314 00:22:49.221749   39054 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 00:22:49.221757   39054 command_runner.go:130] > Access: 2024-03-14 00:22:49.061180765 +0000
	I0314 00:22:49.221766   39054 command_runner.go:130] > Modify: 2024-03-14 00:22:49.061180765 +0000
	I0314 00:22:49.221774   39054 command_runner.go:130] > Change: 2024-03-14 00:22:49.061180765 +0000
	I0314 00:22:49.221780   39054 command_runner.go:130] >  Birth: -
	I0314 00:22:49.221796   39054 start.go:562] Will wait 60s for crictl version
	I0314 00:22:49.221851   39054 ssh_runner.go:195] Run: which crictl
	I0314 00:22:49.225809   39054 command_runner.go:130] > /usr/bin/crictl
	I0314 00:22:49.225982   39054 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:22:49.267283   39054 command_runner.go:130] > Version:  0.1.0
	I0314 00:22:49.267309   39054 command_runner.go:130] > RuntimeName:  cri-o
	I0314 00:22:49.267316   39054 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0314 00:22:49.267329   39054 command_runner.go:130] > RuntimeApiVersion:  v1
	I0314 00:22:49.267349   39054 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:22:49.267432   39054 ssh_runner.go:195] Run: crio --version
	I0314 00:22:49.304291   39054 command_runner.go:130] > crio version 1.29.1
	I0314 00:22:49.304316   39054 command_runner.go:130] > Version:        1.29.1
	I0314 00:22:49.304322   39054 command_runner.go:130] > GitCommit:      unknown
	I0314 00:22:49.304326   39054 command_runner.go:130] > GitCommitDate:  unknown
	I0314 00:22:49.304330   39054 command_runner.go:130] > GitTreeState:   clean
	I0314 00:22:49.304343   39054 command_runner.go:130] > BuildDate:      2024-03-13T22:45:41Z
	I0314 00:22:49.304350   39054 command_runner.go:130] > GoVersion:      go1.21.6
	I0314 00:22:49.304357   39054 command_runner.go:130] > Compiler:       gc
	I0314 00:22:49.304364   39054 command_runner.go:130] > Platform:       linux/amd64
	I0314 00:22:49.304370   39054 command_runner.go:130] > Linkmode:       dynamic
	I0314 00:22:49.304380   39054 command_runner.go:130] > BuildTags:      
	I0314 00:22:49.304388   39054 command_runner.go:130] >   containers_image_ostree_stub
	I0314 00:22:49.304395   39054 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0314 00:22:49.304402   39054 command_runner.go:130] >   btrfs_noversion
	I0314 00:22:49.304409   39054 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0314 00:22:49.304432   39054 command_runner.go:130] >   libdm_no_deferred_remove
	I0314 00:22:49.304439   39054 command_runner.go:130] >   seccomp
	I0314 00:22:49.304446   39054 command_runner.go:130] > LDFlags:          unknown
	I0314 00:22:49.304453   39054 command_runner.go:130] > SeccompEnabled:   true
	I0314 00:22:49.304459   39054 command_runner.go:130] > AppArmorEnabled:  false
	I0314 00:22:49.304560   39054 ssh_runner.go:195] Run: crio --version
	I0314 00:22:49.335307   39054 command_runner.go:130] > crio version 1.29.1
	I0314 00:22:49.335334   39054 command_runner.go:130] > Version:        1.29.1
	I0314 00:22:49.335357   39054 command_runner.go:130] > GitCommit:      unknown
	I0314 00:22:49.335364   39054 command_runner.go:130] > GitCommitDate:  unknown
	I0314 00:22:49.335370   39054 command_runner.go:130] > GitTreeState:   clean
	I0314 00:22:49.335378   39054 command_runner.go:130] > BuildDate:      2024-03-13T22:45:41Z
	I0314 00:22:49.335390   39054 command_runner.go:130] > GoVersion:      go1.21.6
	I0314 00:22:49.335397   39054 command_runner.go:130] > Compiler:       gc
	I0314 00:22:49.335406   39054 command_runner.go:130] > Platform:       linux/amd64
	I0314 00:22:49.335413   39054 command_runner.go:130] > Linkmode:       dynamic
	I0314 00:22:49.335429   39054 command_runner.go:130] > BuildTags:      
	I0314 00:22:49.335438   39054 command_runner.go:130] >   containers_image_ostree_stub
	I0314 00:22:49.335447   39054 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0314 00:22:49.335454   39054 command_runner.go:130] >   btrfs_noversion
	I0314 00:22:49.335463   39054 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0314 00:22:49.335474   39054 command_runner.go:130] >   libdm_no_deferred_remove
	I0314 00:22:49.335479   39054 command_runner.go:130] >   seccomp
	I0314 00:22:49.335486   39054 command_runner.go:130] > LDFlags:          unknown
	I0314 00:22:49.335495   39054 command_runner.go:130] > SeccompEnabled:   true
	I0314 00:22:49.335500   39054 command_runner.go:130] > AppArmorEnabled:  false
	I0314 00:22:49.338472   39054 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 00:22:49.339951   39054 main.go:141] libmachine: (multinode-507871) Calling .GetIP
	I0314 00:22:49.342449   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:49.342823   39054 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:22:49.342854   39054 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:22:49.343095   39054 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 00:22:49.347675   39054 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0314 00:22:49.347888   39054 kubeadm.go:877] updating cluster {Name:multinode-507871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.4 ClusterName:multinode-507871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:22:49.348020   39054 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:22:49.348086   39054 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:22:49.394096   39054 command_runner.go:130] > {
	I0314 00:22:49.394121   39054 command_runner.go:130] >   "images": [
	I0314 00:22:49.394126   39054 command_runner.go:130] >     {
	I0314 00:22:49.394138   39054 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0314 00:22:49.394144   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.394153   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0314 00:22:49.394159   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394165   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.394176   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0314 00:22:49.394188   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0314 00:22:49.394195   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394202   39054 command_runner.go:130] >       "size": "65258016",
	I0314 00:22:49.394210   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.394219   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.394238   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.394249   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.394256   39054 command_runner.go:130] >     },
	I0314 00:22:49.394262   39054 command_runner.go:130] >     {
	I0314 00:22:49.394273   39054 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0314 00:22:49.394283   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.394293   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0314 00:22:49.394315   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394322   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.394335   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0314 00:22:49.394349   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0314 00:22:49.394359   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394366   39054 command_runner.go:130] >       "size": "65291810",
	I0314 00:22:49.394375   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.394391   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.394402   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.394412   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.394419   39054 command_runner.go:130] >     },
	I0314 00:22:49.394428   39054 command_runner.go:130] >     {
	I0314 00:22:49.394440   39054 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0314 00:22:49.394451   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.394464   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0314 00:22:49.394470   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394478   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.394494   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0314 00:22:49.394508   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0314 00:22:49.394517   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394525   39054 command_runner.go:130] >       "size": "1363676",
	I0314 00:22:49.394534   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.394540   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.394545   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.394551   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.394557   39054 command_runner.go:130] >     },
	I0314 00:22:49.394562   39054 command_runner.go:130] >     {
	I0314 00:22:49.394575   39054 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0314 00:22:49.394585   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.394595   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0314 00:22:49.394604   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394611   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.394625   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0314 00:22:49.394649   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0314 00:22:49.394659   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394667   39054 command_runner.go:130] >       "size": "31470524",
	I0314 00:22:49.394683   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.394693   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.394700   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.394710   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.394717   39054 command_runner.go:130] >     },
	I0314 00:22:49.394725   39054 command_runner.go:130] >     {
	I0314 00:22:49.394736   39054 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0314 00:22:49.394753   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.394780   39054 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0314 00:22:49.394787   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394794   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.394815   39054 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0314 00:22:49.394831   39054 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0314 00:22:49.394841   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394851   39054 command_runner.go:130] >       "size": "53621675",
	I0314 00:22:49.394861   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.394868   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.394875   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.394885   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.394891   39054 command_runner.go:130] >     },
	I0314 00:22:49.394898   39054 command_runner.go:130] >     {
	I0314 00:22:49.394911   39054 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0314 00:22:49.394922   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.394935   39054 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0314 00:22:49.394945   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394953   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.394968   39054 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0314 00:22:49.394983   39054 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0314 00:22:49.394991   39054 command_runner.go:130] >       ],
	I0314 00:22:49.394999   39054 command_runner.go:130] >       "size": "295456551",
	I0314 00:22:49.395008   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.395017   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.395026   39054 command_runner.go:130] >       },
	I0314 00:22:49.395038   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.395047   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.395054   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.395070   39054 command_runner.go:130] >     },
	I0314 00:22:49.395079   39054 command_runner.go:130] >     {
	I0314 00:22:49.395089   39054 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0314 00:22:49.395099   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.395108   39054 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0314 00:22:49.395116   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395123   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.395139   39054 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0314 00:22:49.395154   39054 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0314 00:22:49.395163   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395170   39054 command_runner.go:130] >       "size": "127226832",
	I0314 00:22:49.395180   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.395187   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.395196   39054 command_runner.go:130] >       },
	I0314 00:22:49.395203   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.395211   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.395218   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.395227   39054 command_runner.go:130] >     },
	I0314 00:22:49.395233   39054 command_runner.go:130] >     {
	I0314 00:22:49.395244   39054 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0314 00:22:49.395254   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.395263   39054 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0314 00:22:49.395272   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395279   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.395312   39054 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0314 00:22:49.395332   39054 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0314 00:22:49.395338   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395344   39054 command_runner.go:130] >       "size": "123261750",
	I0314 00:22:49.395353   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.395360   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.395369   39054 command_runner.go:130] >       },
	I0314 00:22:49.395376   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.395386   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.395394   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.395402   39054 command_runner.go:130] >     },
	I0314 00:22:49.395408   39054 command_runner.go:130] >     {
	I0314 00:22:49.395426   39054 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0314 00:22:49.395437   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.395449   39054 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0314 00:22:49.395456   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395466   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.395475   39054 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0314 00:22:49.395485   39054 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0314 00:22:49.395490   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395497   39054 command_runner.go:130] >       "size": "74749335",
	I0314 00:22:49.395503   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.395509   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.395515   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.395522   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.395527   39054 command_runner.go:130] >     },
	I0314 00:22:49.395533   39054 command_runner.go:130] >     {
	I0314 00:22:49.395543   39054 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0314 00:22:49.395549   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.395558   39054 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0314 00:22:49.395564   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395571   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.395582   39054 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0314 00:22:49.395594   39054 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0314 00:22:49.395603   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395612   39054 command_runner.go:130] >       "size": "61551410",
	I0314 00:22:49.395620   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.395627   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.395636   39054 command_runner.go:130] >       },
	I0314 00:22:49.395644   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.395653   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.395661   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.395669   39054 command_runner.go:130] >     },
	I0314 00:22:49.395675   39054 command_runner.go:130] >     {
	I0314 00:22:49.395689   39054 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0314 00:22:49.395698   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.395705   39054 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0314 00:22:49.395711   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395728   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.395744   39054 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0314 00:22:49.395759   39054 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0314 00:22:49.395768   39054 command_runner.go:130] >       ],
	I0314 00:22:49.395775   39054 command_runner.go:130] >       "size": "750414",
	I0314 00:22:49.395784   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.395791   39054 command_runner.go:130] >         "value": "65535"
	I0314 00:22:49.395799   39054 command_runner.go:130] >       },
	I0314 00:22:49.395807   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.395816   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.395825   39054 command_runner.go:130] >       "pinned": true
	I0314 00:22:49.395833   39054 command_runner.go:130] >     }
	I0314 00:22:49.395840   39054 command_runner.go:130] >   ]
	I0314 00:22:49.395846   39054 command_runner.go:130] > }
	I0314 00:22:49.396041   39054 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:22:49.396054   39054 crio.go:415] Images already preloaded, skipping extraction
	I0314 00:22:49.396112   39054 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:22:49.432649   39054 command_runner.go:130] > {
	I0314 00:22:49.432669   39054 command_runner.go:130] >   "images": [
	I0314 00:22:49.432674   39054 command_runner.go:130] >     {
	I0314 00:22:49.432681   39054 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0314 00:22:49.432687   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.432692   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0314 00:22:49.432696   39054 command_runner.go:130] >       ],
	I0314 00:22:49.432699   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.432708   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0314 00:22:49.432715   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0314 00:22:49.432719   39054 command_runner.go:130] >       ],
	I0314 00:22:49.432723   39054 command_runner.go:130] >       "size": "65258016",
	I0314 00:22:49.432737   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.432743   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.432753   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.432763   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.432768   39054 command_runner.go:130] >     },
	I0314 00:22:49.432773   39054 command_runner.go:130] >     {
	I0314 00:22:49.432782   39054 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0314 00:22:49.432788   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.432798   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0314 00:22:49.432804   39054 command_runner.go:130] >       ],
	I0314 00:22:49.432810   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.432824   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0314 00:22:49.432837   39054 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0314 00:22:49.432846   39054 command_runner.go:130] >       ],
	I0314 00:22:49.432856   39054 command_runner.go:130] >       "size": "65291810",
	I0314 00:22:49.432862   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.432880   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.432886   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.432890   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.432894   39054 command_runner.go:130] >     },
	I0314 00:22:49.432897   39054 command_runner.go:130] >     {
	I0314 00:22:49.432903   39054 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0314 00:22:49.432907   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.432912   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0314 00:22:49.432916   39054 command_runner.go:130] >       ],
	I0314 00:22:49.432925   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.432935   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0314 00:22:49.432942   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0314 00:22:49.432946   39054 command_runner.go:130] >       ],
	I0314 00:22:49.432950   39054 command_runner.go:130] >       "size": "1363676",
	I0314 00:22:49.432954   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.432958   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.432964   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.432969   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.432972   39054 command_runner.go:130] >     },
	I0314 00:22:49.432975   39054 command_runner.go:130] >     {
	I0314 00:22:49.432985   39054 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0314 00:22:49.432992   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.432997   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0314 00:22:49.433000   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433004   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433012   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0314 00:22:49.433030   39054 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0314 00:22:49.433041   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433045   39054 command_runner.go:130] >       "size": "31470524",
	I0314 00:22:49.433052   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.433057   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433060   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433064   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.433067   39054 command_runner.go:130] >     },
	I0314 00:22:49.433070   39054 command_runner.go:130] >     {
	I0314 00:22:49.433087   39054 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0314 00:22:49.433094   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.433098   39054 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0314 00:22:49.433102   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433106   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433113   39054 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0314 00:22:49.433123   39054 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0314 00:22:49.433127   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433130   39054 command_runner.go:130] >       "size": "53621675",
	I0314 00:22:49.433134   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.433138   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433142   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433147   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.433150   39054 command_runner.go:130] >     },
	I0314 00:22:49.433153   39054 command_runner.go:130] >     {
	I0314 00:22:49.433159   39054 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0314 00:22:49.433162   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.433167   39054 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0314 00:22:49.433170   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433174   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433182   39054 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0314 00:22:49.433194   39054 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0314 00:22:49.433205   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433209   39054 command_runner.go:130] >       "size": "295456551",
	I0314 00:22:49.433212   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.433215   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.433220   39054 command_runner.go:130] >       },
	I0314 00:22:49.433224   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433227   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433231   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.433234   39054 command_runner.go:130] >     },
	I0314 00:22:49.433237   39054 command_runner.go:130] >     {
	I0314 00:22:49.433243   39054 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0314 00:22:49.433247   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.433251   39054 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0314 00:22:49.433255   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433259   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433267   39054 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0314 00:22:49.433277   39054 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0314 00:22:49.433280   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433284   39054 command_runner.go:130] >       "size": "127226832",
	I0314 00:22:49.433290   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.433294   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.433297   39054 command_runner.go:130] >       },
	I0314 00:22:49.433303   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433307   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433313   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.433316   39054 command_runner.go:130] >     },
	I0314 00:22:49.433320   39054 command_runner.go:130] >     {
	I0314 00:22:49.433326   39054 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0314 00:22:49.433332   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.433338   39054 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0314 00:22:49.433346   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433353   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433392   39054 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0314 00:22:49.433407   39054 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0314 00:22:49.433413   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433426   39054 command_runner.go:130] >       "size": "123261750",
	I0314 00:22:49.433434   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.433439   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.433445   39054 command_runner.go:130] >       },
	I0314 00:22:49.433448   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433452   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433456   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.433459   39054 command_runner.go:130] >     },
	I0314 00:22:49.433462   39054 command_runner.go:130] >     {
	I0314 00:22:49.433471   39054 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0314 00:22:49.433476   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.433481   39054 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0314 00:22:49.433485   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433506   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433514   39054 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0314 00:22:49.433523   39054 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0314 00:22:49.433529   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433535   39054 command_runner.go:130] >       "size": "74749335",
	I0314 00:22:49.433539   39054 command_runner.go:130] >       "uid": null,
	I0314 00:22:49.433543   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433546   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433550   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.433553   39054 command_runner.go:130] >     },
	I0314 00:22:49.433557   39054 command_runner.go:130] >     {
	I0314 00:22:49.433562   39054 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0314 00:22:49.433567   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.433571   39054 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0314 00:22:49.433577   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433581   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433588   39054 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0314 00:22:49.433597   39054 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0314 00:22:49.433601   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433605   39054 command_runner.go:130] >       "size": "61551410",
	I0314 00:22:49.433609   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.433613   39054 command_runner.go:130] >         "value": "0"
	I0314 00:22:49.433618   39054 command_runner.go:130] >       },
	I0314 00:22:49.433627   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433633   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433636   39054 command_runner.go:130] >       "pinned": false
	I0314 00:22:49.433640   39054 command_runner.go:130] >     },
	I0314 00:22:49.433643   39054 command_runner.go:130] >     {
	I0314 00:22:49.433649   39054 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0314 00:22:49.433653   39054 command_runner.go:130] >       "repoTags": [
	I0314 00:22:49.433657   39054 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0314 00:22:49.433661   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433665   39054 command_runner.go:130] >       "repoDigests": [
	I0314 00:22:49.433677   39054 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0314 00:22:49.433686   39054 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0314 00:22:49.433689   39054 command_runner.go:130] >       ],
	I0314 00:22:49.433693   39054 command_runner.go:130] >       "size": "750414",
	I0314 00:22:49.433697   39054 command_runner.go:130] >       "uid": {
	I0314 00:22:49.433701   39054 command_runner.go:130] >         "value": "65535"
	I0314 00:22:49.433706   39054 command_runner.go:130] >       },
	I0314 00:22:49.433710   39054 command_runner.go:130] >       "username": "",
	I0314 00:22:49.433716   39054 command_runner.go:130] >       "spec": null,
	I0314 00:22:49.433720   39054 command_runner.go:130] >       "pinned": true
	I0314 00:22:49.433726   39054 command_runner.go:130] >     }
	I0314 00:22:49.433729   39054 command_runner.go:130] >   ]
	I0314 00:22:49.433732   39054 command_runner.go:130] > }
	I0314 00:22:49.433848   39054 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:22:49.433859   39054 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:22:49.433866   39054 kubeadm.go:928] updating node { 192.168.39.60 8443 v1.28.4 crio true true} ...
	I0314 00:22:49.433957   39054 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-507871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-507871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:22:49.434020   39054 ssh_runner.go:195] Run: crio config
	I0314 00:22:49.469042   39054 command_runner.go:130] ! time="2024-03-14 00:22:49.437811878Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0314 00:22:49.480271   39054 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0314 00:22:49.487841   39054 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0314 00:22:49.487865   39054 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0314 00:22:49.487872   39054 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0314 00:22:49.487875   39054 command_runner.go:130] > #
	I0314 00:22:49.487881   39054 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0314 00:22:49.487887   39054 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0314 00:22:49.487892   39054 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0314 00:22:49.487905   39054 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0314 00:22:49.487911   39054 command_runner.go:130] > # reload'.
	I0314 00:22:49.487919   39054 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0314 00:22:49.487932   39054 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0314 00:22:49.487941   39054 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0314 00:22:49.487951   39054 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0314 00:22:49.487957   39054 command_runner.go:130] > [crio]
	I0314 00:22:49.487967   39054 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0314 00:22:49.487975   39054 command_runner.go:130] > # containers images, in this directory.
	I0314 00:22:49.487983   39054 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0314 00:22:49.487993   39054 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0314 00:22:49.488012   39054 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0314 00:22:49.488020   39054 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0314 00:22:49.488027   39054 command_runner.go:130] > # imagestore = ""
	I0314 00:22:49.488033   39054 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0314 00:22:49.488039   39054 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0314 00:22:49.488045   39054 command_runner.go:130] > storage_driver = "overlay"
	I0314 00:22:49.488054   39054 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0314 00:22:49.488065   39054 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0314 00:22:49.488076   39054 command_runner.go:130] > storage_option = [
	I0314 00:22:49.488086   39054 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0314 00:22:49.488089   39054 command_runner.go:130] > ]
	I0314 00:22:49.488098   39054 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0314 00:22:49.488104   39054 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0314 00:22:49.488111   39054 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0314 00:22:49.488116   39054 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0314 00:22:49.488123   39054 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0314 00:22:49.488128   39054 command_runner.go:130] > # always happen on a node reboot
	I0314 00:22:49.488136   39054 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0314 00:22:49.488155   39054 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0314 00:22:49.488176   39054 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0314 00:22:49.488184   39054 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0314 00:22:49.488192   39054 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0314 00:22:49.488202   39054 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0314 00:22:49.488212   39054 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0314 00:22:49.488218   39054 command_runner.go:130] > # internal_wipe = true
	I0314 00:22:49.488231   39054 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0314 00:22:49.488243   39054 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0314 00:22:49.488252   39054 command_runner.go:130] > # internal_repair = false
	I0314 00:22:49.488264   39054 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0314 00:22:49.488276   39054 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0314 00:22:49.488288   39054 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0314 00:22:49.488297   39054 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0314 00:22:49.488303   39054 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0314 00:22:49.488309   39054 command_runner.go:130] > [crio.api]
	I0314 00:22:49.488315   39054 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0314 00:22:49.488324   39054 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0314 00:22:49.488337   39054 command_runner.go:130] > # IP address on which the stream server will listen.
	I0314 00:22:49.488349   39054 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0314 00:22:49.488363   39054 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0314 00:22:49.488374   39054 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0314 00:22:49.488383   39054 command_runner.go:130] > # stream_port = "0"
	I0314 00:22:49.488394   39054 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0314 00:22:49.488401   39054 command_runner.go:130] > # stream_enable_tls = false
	I0314 00:22:49.488410   39054 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0314 00:22:49.488420   39054 command_runner.go:130] > # stream_idle_timeout = ""
	I0314 00:22:49.488433   39054 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0314 00:22:49.488449   39054 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0314 00:22:49.488457   39054 command_runner.go:130] > # minutes.
	I0314 00:22:49.488466   39054 command_runner.go:130] > # stream_tls_cert = ""
	I0314 00:22:49.488477   39054 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0314 00:22:49.488486   39054 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0314 00:22:49.488495   39054 command_runner.go:130] > # stream_tls_key = ""
	I0314 00:22:49.488509   39054 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0314 00:22:49.488521   39054 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0314 00:22:49.488551   39054 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0314 00:22:49.488561   39054 command_runner.go:130] > # stream_tls_ca = ""
	I0314 00:22:49.488570   39054 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0314 00:22:49.488578   39054 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0314 00:22:49.488596   39054 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0314 00:22:49.488607   39054 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0314 00:22:49.488620   39054 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0314 00:22:49.488637   39054 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0314 00:22:49.488651   39054 command_runner.go:130] > [crio.runtime]
	I0314 00:22:49.488660   39054 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0314 00:22:49.488671   39054 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0314 00:22:49.488680   39054 command_runner.go:130] > # "nofile=1024:2048"
	I0314 00:22:49.488693   39054 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0314 00:22:49.488703   39054 command_runner.go:130] > # default_ulimits = [
	I0314 00:22:49.488711   39054 command_runner.go:130] > # ]
	I0314 00:22:49.488724   39054 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0314 00:22:49.488733   39054 command_runner.go:130] > # no_pivot = false
	I0314 00:22:49.488741   39054 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0314 00:22:49.488750   39054 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0314 00:22:49.488761   39054 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0314 00:22:49.488774   39054 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0314 00:22:49.488785   39054 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0314 00:22:49.488797   39054 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0314 00:22:49.488807   39054 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0314 00:22:49.488816   39054 command_runner.go:130] > # Cgroup setting for conmon
	I0314 00:22:49.488828   39054 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0314 00:22:49.488835   39054 command_runner.go:130] > conmon_cgroup = "pod"
	I0314 00:22:49.488843   39054 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0314 00:22:49.488855   39054 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0314 00:22:49.488874   39054 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0314 00:22:49.488883   39054 command_runner.go:130] > conmon_env = [
	I0314 00:22:49.488894   39054 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0314 00:22:49.488902   39054 command_runner.go:130] > ]
	I0314 00:22:49.488913   39054 command_runner.go:130] > # Additional environment variables to set for all the
	I0314 00:22:49.488921   39054 command_runner.go:130] > # containers. These are overridden if set in the
	I0314 00:22:49.488929   39054 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0314 00:22:49.488939   39054 command_runner.go:130] > # default_env = [
	I0314 00:22:49.488948   39054 command_runner.go:130] > # ]
	I0314 00:22:49.488960   39054 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0314 00:22:49.488975   39054 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0314 00:22:49.488983   39054 command_runner.go:130] > # selinux = false
	I0314 00:22:49.488993   39054 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0314 00:22:49.489003   39054 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0314 00:22:49.489019   39054 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0314 00:22:49.489030   39054 command_runner.go:130] > # seccomp_profile = ""
	I0314 00:22:49.489039   39054 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0314 00:22:49.489050   39054 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0314 00:22:49.489062   39054 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0314 00:22:49.489073   39054 command_runner.go:130] > # which might increase security.
	I0314 00:22:49.489080   39054 command_runner.go:130] > # This option is currently deprecated,
	I0314 00:22:49.489090   39054 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0314 00:22:49.489098   39054 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0314 00:22:49.489112   39054 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0314 00:22:49.489125   39054 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0314 00:22:49.489138   39054 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0314 00:22:49.489150   39054 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0314 00:22:49.489162   39054 command_runner.go:130] > # This option supports live configuration reload.
	I0314 00:22:49.489172   39054 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0314 00:22:49.489180   39054 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0314 00:22:49.489189   39054 command_runner.go:130] > # the cgroup blockio controller.
	I0314 00:22:49.489199   39054 command_runner.go:130] > # blockio_config_file = ""
	I0314 00:22:49.489213   39054 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0314 00:22:49.489222   39054 command_runner.go:130] > # blockio parameters.
	I0314 00:22:49.489231   39054 command_runner.go:130] > # blockio_reload = false
	I0314 00:22:49.489245   39054 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0314 00:22:49.489254   39054 command_runner.go:130] > # irqbalance daemon.
	I0314 00:22:49.489266   39054 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0314 00:22:49.489280   39054 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0314 00:22:49.489294   39054 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0314 00:22:49.489307   39054 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0314 00:22:49.489320   39054 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0314 00:22:49.489333   39054 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0314 00:22:49.489344   39054 command_runner.go:130] > # This option supports live configuration reload.
	I0314 00:22:49.489351   39054 command_runner.go:130] > # rdt_config_file = ""
	I0314 00:22:49.489358   39054 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0314 00:22:49.489367   39054 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0314 00:22:49.489407   39054 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0314 00:22:49.489418   39054 command_runner.go:130] > # separate_pull_cgroup = ""
	I0314 00:22:49.489428   39054 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0314 00:22:49.489443   39054 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0314 00:22:49.489451   39054 command_runner.go:130] > # will be added.
	I0314 00:22:49.489462   39054 command_runner.go:130] > # default_capabilities = [
	I0314 00:22:49.489471   39054 command_runner.go:130] > # 	"CHOWN",
	I0314 00:22:49.489480   39054 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0314 00:22:49.489489   39054 command_runner.go:130] > # 	"FSETID",
	I0314 00:22:49.489498   39054 command_runner.go:130] > # 	"FOWNER",
	I0314 00:22:49.489506   39054 command_runner.go:130] > # 	"SETGID",
	I0314 00:22:49.489515   39054 command_runner.go:130] > # 	"SETUID",
	I0314 00:22:49.489522   39054 command_runner.go:130] > # 	"SETPCAP",
	I0314 00:22:49.489526   39054 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0314 00:22:49.489534   39054 command_runner.go:130] > # 	"KILL",
	I0314 00:22:49.489540   39054 command_runner.go:130] > # ]
	I0314 00:22:49.489555   39054 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0314 00:22:49.489570   39054 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0314 00:22:49.489581   39054 command_runner.go:130] > # add_inheritable_capabilities = false
	I0314 00:22:49.489593   39054 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0314 00:22:49.489604   39054 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0314 00:22:49.489611   39054 command_runner.go:130] > # default_sysctls = [
	I0314 00:22:49.489615   39054 command_runner.go:130] > # ]
	I0314 00:22:49.489622   39054 command_runner.go:130] > # List of devices on the host that a
	I0314 00:22:49.489639   39054 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0314 00:22:49.489657   39054 command_runner.go:130] > # allowed_devices = [
	I0314 00:22:49.489663   39054 command_runner.go:130] > # 	"/dev/fuse",
	I0314 00:22:49.489668   39054 command_runner.go:130] > # ]
	I0314 00:22:49.489676   39054 command_runner.go:130] > # List of additional devices, specified as
	I0314 00:22:49.489687   39054 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0314 00:22:49.489696   39054 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0314 00:22:49.489703   39054 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0314 00:22:49.489716   39054 command_runner.go:130] > # additional_devices = [
	I0314 00:22:49.489725   39054 command_runner.go:130] > # ]
	I0314 00:22:49.489736   39054 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0314 00:22:49.489751   39054 command_runner.go:130] > # cdi_spec_dirs = [
	I0314 00:22:49.489760   39054 command_runner.go:130] > # 	"/etc/cdi",
	I0314 00:22:49.489766   39054 command_runner.go:130] > # 	"/var/run/cdi",
	I0314 00:22:49.489775   39054 command_runner.go:130] > # ]
	I0314 00:22:49.489788   39054 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0314 00:22:49.489801   39054 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0314 00:22:49.489811   39054 command_runner.go:130] > # Defaults to false.
	I0314 00:22:49.489823   39054 command_runner.go:130] > # device_ownership_from_security_context = false
	I0314 00:22:49.489836   39054 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0314 00:22:49.489850   39054 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0314 00:22:49.489859   39054 command_runner.go:130] > # hooks_dir = [
	I0314 00:22:49.489868   39054 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0314 00:22:49.489873   39054 command_runner.go:130] > # ]
	I0314 00:22:49.489882   39054 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0314 00:22:49.489895   39054 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0314 00:22:49.489907   39054 command_runner.go:130] > # its default mounts from the following two files:
	I0314 00:22:49.489915   39054 command_runner.go:130] > #
	I0314 00:22:49.489926   39054 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0314 00:22:49.489939   39054 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0314 00:22:49.489951   39054 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0314 00:22:49.489957   39054 command_runner.go:130] > #
	I0314 00:22:49.489964   39054 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0314 00:22:49.489976   39054 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0314 00:22:49.489994   39054 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0314 00:22:49.490005   39054 command_runner.go:130] > #      only add mounts it finds in this file.
	I0314 00:22:49.490014   39054 command_runner.go:130] > #
	I0314 00:22:49.490021   39054 command_runner.go:130] > # default_mounts_file = ""
	I0314 00:22:49.490029   39054 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0314 00:22:49.490040   39054 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0314 00:22:49.490046   39054 command_runner.go:130] > pids_limit = 1024
	I0314 00:22:49.490056   39054 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0314 00:22:49.490069   39054 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0314 00:22:49.490082   39054 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0314 00:22:49.490097   39054 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0314 00:22:49.490107   39054 command_runner.go:130] > # log_size_max = -1
	I0314 00:22:49.490120   39054 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0314 00:22:49.490130   39054 command_runner.go:130] > # log_to_journald = false
	I0314 00:22:49.490142   39054 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0314 00:22:49.490153   39054 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0314 00:22:49.490164   39054 command_runner.go:130] > # Path to directory for container attach sockets.
	I0314 00:22:49.490181   39054 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0314 00:22:49.490192   39054 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0314 00:22:49.490202   39054 command_runner.go:130] > # bind_mount_prefix = ""
	I0314 00:22:49.490214   39054 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0314 00:22:49.490221   39054 command_runner.go:130] > # read_only = false
	I0314 00:22:49.490228   39054 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0314 00:22:49.490241   39054 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0314 00:22:49.490251   39054 command_runner.go:130] > # live configuration reload.
	I0314 00:22:49.490258   39054 command_runner.go:130] > # log_level = "info"
	I0314 00:22:49.490270   39054 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0314 00:22:49.490281   39054 command_runner.go:130] > # This option supports live configuration reload.
	I0314 00:22:49.490289   39054 command_runner.go:130] > # log_filter = ""
	I0314 00:22:49.490301   39054 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0314 00:22:49.490312   39054 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0314 00:22:49.490321   39054 command_runner.go:130] > # separated by comma.
	I0314 00:22:49.490336   39054 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 00:22:49.490346   39054 command_runner.go:130] > # uid_mappings = ""
	I0314 00:22:49.490356   39054 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0314 00:22:49.490368   39054 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0314 00:22:49.490378   39054 command_runner.go:130] > # separated by comma.
	I0314 00:22:49.490391   39054 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 00:22:49.490398   39054 command_runner.go:130] > # gid_mappings = ""
	I0314 00:22:49.490408   39054 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0314 00:22:49.490422   39054 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0314 00:22:49.490434   39054 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0314 00:22:49.490452   39054 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 00:22:49.490463   39054 command_runner.go:130] > # minimum_mappable_uid = -1
	I0314 00:22:49.490475   39054 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0314 00:22:49.490483   39054 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0314 00:22:49.490495   39054 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0314 00:22:49.490511   39054 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 00:22:49.490521   39054 command_runner.go:130] > # minimum_mappable_gid = -1
	I0314 00:22:49.490533   39054 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0314 00:22:49.490548   39054 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0314 00:22:49.490559   39054 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0314 00:22:49.490567   39054 command_runner.go:130] > # ctr_stop_timeout = 30
	I0314 00:22:49.490579   39054 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0314 00:22:49.490592   39054 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0314 00:22:49.490603   39054 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0314 00:22:49.490615   39054 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0314 00:22:49.490621   39054 command_runner.go:130] > drop_infra_ctr = false
	I0314 00:22:49.490634   39054 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0314 00:22:49.490650   39054 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0314 00:22:49.490659   39054 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0314 00:22:49.490668   39054 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0314 00:22:49.490683   39054 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0314 00:22:49.490696   39054 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0314 00:22:49.490708   39054 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0314 00:22:49.490719   39054 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0314 00:22:49.490727   39054 command_runner.go:130] > # shared_cpuset = ""
	I0314 00:22:49.490738   39054 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0314 00:22:49.490745   39054 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0314 00:22:49.490751   39054 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0314 00:22:49.490779   39054 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0314 00:22:49.490789   39054 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0314 00:22:49.490798   39054 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0314 00:22:49.490808   39054 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0314 00:22:49.490818   39054 command_runner.go:130] > # enable_criu_support = false
	I0314 00:22:49.490826   39054 command_runner.go:130] > # Enable/disable the generation of the container,
	I0314 00:22:49.490836   39054 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0314 00:22:49.490843   39054 command_runner.go:130] > # enable_pod_events = false
	I0314 00:22:49.490853   39054 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0314 00:22:49.490878   39054 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0314 00:22:49.490888   39054 command_runner.go:130] > # default_runtime = "runc"
	I0314 00:22:49.490896   39054 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0314 00:22:49.490909   39054 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0314 00:22:49.490924   39054 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0314 00:22:49.490939   39054 command_runner.go:130] > # creation as a file is not desired either.
	I0314 00:22:49.490955   39054 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0314 00:22:49.490966   39054 command_runner.go:130] > # the hostname is being managed dynamically.
	I0314 00:22:49.490973   39054 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0314 00:22:49.490983   39054 command_runner.go:130] > # ]
	I0314 00:22:49.490999   39054 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0314 00:22:49.491007   39054 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0314 00:22:49.491014   39054 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0314 00:22:49.491021   39054 command_runner.go:130] > # Each entry in the table should follow the format:
	I0314 00:22:49.491025   39054 command_runner.go:130] > #
	I0314 00:22:49.491033   39054 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0314 00:22:49.491041   39054 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0314 00:22:49.491049   39054 command_runner.go:130] > # runtime_type = "oci"
	I0314 00:22:49.491127   39054 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0314 00:22:49.491142   39054 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0314 00:22:49.491149   39054 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0314 00:22:49.491160   39054 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0314 00:22:49.491169   39054 command_runner.go:130] > # monitor_env = []
	I0314 00:22:49.491180   39054 command_runner.go:130] > # privileged_without_host_devices = false
	I0314 00:22:49.491189   39054 command_runner.go:130] > # allowed_annotations = []
	I0314 00:22:49.491198   39054 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0314 00:22:49.491207   39054 command_runner.go:130] > # Where:
	I0314 00:22:49.491219   39054 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0314 00:22:49.491229   39054 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0314 00:22:49.491241   39054 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0314 00:22:49.491252   39054 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0314 00:22:49.491261   39054 command_runner.go:130] > #   in $PATH.
	I0314 00:22:49.491273   39054 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0314 00:22:49.491281   39054 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0314 00:22:49.491290   39054 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0314 00:22:49.491300   39054 command_runner.go:130] > #   state.
	I0314 00:22:49.491313   39054 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0314 00:22:49.491325   39054 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0314 00:22:49.491337   39054 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0314 00:22:49.491348   39054 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0314 00:22:49.491359   39054 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0314 00:22:49.491369   39054 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0314 00:22:49.491395   39054 command_runner.go:130] > #   The currently recognized values are:
	I0314 00:22:49.491411   39054 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0314 00:22:49.491425   39054 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0314 00:22:49.491443   39054 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0314 00:22:49.491452   39054 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0314 00:22:49.491466   39054 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0314 00:22:49.491481   39054 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0314 00:22:49.491491   39054 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0314 00:22:49.491505   39054 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0314 00:22:49.491514   39054 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0314 00:22:49.491523   39054 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0314 00:22:49.491531   39054 command_runner.go:130] > #   deprecated option "conmon".
	I0314 00:22:49.491540   39054 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0314 00:22:49.491551   39054 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0314 00:22:49.491566   39054 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0314 00:22:49.491577   39054 command_runner.go:130] > #   should be moved to the container's cgroup
	I0314 00:22:49.491590   39054 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0314 00:22:49.491601   39054 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0314 00:22:49.491614   39054 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0314 00:22:49.491622   39054 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0314 00:22:49.491629   39054 command_runner.go:130] > #
	I0314 00:22:49.491637   39054 command_runner.go:130] > # Using the seccomp notifier feature:
	I0314 00:22:49.491649   39054 command_runner.go:130] > #
	I0314 00:22:49.491666   39054 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0314 00:22:49.491679   39054 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0314 00:22:49.491687   39054 command_runner.go:130] > #
	I0314 00:22:49.491699   39054 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0314 00:22:49.491708   39054 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0314 00:22:49.491714   39054 command_runner.go:130] > #
	I0314 00:22:49.491724   39054 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0314 00:22:49.491733   39054 command_runner.go:130] > # feature.
	I0314 00:22:49.491741   39054 command_runner.go:130] > #
	I0314 00:22:49.491750   39054 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0314 00:22:49.491762   39054 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0314 00:22:49.491775   39054 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0314 00:22:49.491787   39054 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0314 00:22:49.491799   39054 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0314 00:22:49.491807   39054 command_runner.go:130] > #
	I0314 00:22:49.491817   39054 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0314 00:22:49.491836   39054 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0314 00:22:49.491845   39054 command_runner.go:130] > #
	I0314 00:22:49.491854   39054 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0314 00:22:49.491876   39054 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0314 00:22:49.491883   39054 command_runner.go:130] > #
	I0314 00:22:49.491890   39054 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0314 00:22:49.491903   39054 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0314 00:22:49.491913   39054 command_runner.go:130] > # limitation.
	I0314 00:22:49.491923   39054 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0314 00:22:49.491935   39054 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0314 00:22:49.491940   39054 command_runner.go:130] > runtime_type = "oci"
	I0314 00:22:49.491950   39054 command_runner.go:130] > runtime_root = "/run/runc"
	I0314 00:22:49.491959   39054 command_runner.go:130] > runtime_config_path = ""
	I0314 00:22:49.491968   39054 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0314 00:22:49.491972   39054 command_runner.go:130] > monitor_cgroup = "pod"
	I0314 00:22:49.491981   39054 command_runner.go:130] > monitor_exec_cgroup = ""
	I0314 00:22:49.491991   39054 command_runner.go:130] > monitor_env = [
	I0314 00:22:49.492005   39054 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0314 00:22:49.492012   39054 command_runner.go:130] > ]
	I0314 00:22:49.492020   39054 command_runner.go:130] > privileged_without_host_devices = false
	I0314 00:22:49.492032   39054 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0314 00:22:49.492043   39054 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0314 00:22:49.492055   39054 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0314 00:22:49.492063   39054 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0314 00:22:49.492078   39054 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0314 00:22:49.492092   39054 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0314 00:22:49.492106   39054 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0314 00:22:49.492118   39054 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0314 00:22:49.492127   39054 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0314 00:22:49.492141   39054 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0314 00:22:49.492147   39054 command_runner.go:130] > # Example:
	I0314 00:22:49.492153   39054 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0314 00:22:49.492164   39054 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0314 00:22:49.492175   39054 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0314 00:22:49.492186   39054 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0314 00:22:49.492194   39054 command_runner.go:130] > # cpuset = 0
	I0314 00:22:49.492208   39054 command_runner.go:130] > # cpushares = "0-1"
	I0314 00:22:49.492213   39054 command_runner.go:130] > # Where:
	I0314 00:22:49.492220   39054 command_runner.go:130] > # The workload name is workload-type.
	I0314 00:22:49.492229   39054 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0314 00:22:49.492234   39054 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0314 00:22:49.492239   39054 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0314 00:22:49.492246   39054 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0314 00:22:49.492253   39054 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0314 00:22:49.492260   39054 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0314 00:22:49.492269   39054 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0314 00:22:49.492275   39054 command_runner.go:130] > # Default value is set to true
	I0314 00:22:49.492282   39054 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0314 00:22:49.492291   39054 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0314 00:22:49.492298   39054 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0314 00:22:49.492305   39054 command_runner.go:130] > # Default value is set to 'false'
	I0314 00:22:49.492312   39054 command_runner.go:130] > # disable_hostport_mapping = false
	I0314 00:22:49.492322   39054 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0314 00:22:49.492327   39054 command_runner.go:130] > #
	I0314 00:22:49.492335   39054 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0314 00:22:49.492343   39054 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0314 00:22:49.492351   39054 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0314 00:22:49.492357   39054 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0314 00:22:49.492362   39054 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0314 00:22:49.492365   39054 command_runner.go:130] > [crio.image]
	I0314 00:22:49.492371   39054 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0314 00:22:49.492375   39054 command_runner.go:130] > # default_transport = "docker://"
	I0314 00:22:49.492380   39054 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0314 00:22:49.492386   39054 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0314 00:22:49.492393   39054 command_runner.go:130] > # global_auth_file = ""
	I0314 00:22:49.492397   39054 command_runner.go:130] > # The image used to instantiate infra containers.
	I0314 00:22:49.492405   39054 command_runner.go:130] > # This option supports live configuration reload.
	I0314 00:22:49.492409   39054 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0314 00:22:49.492425   39054 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0314 00:22:49.492438   39054 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0314 00:22:49.492450   39054 command_runner.go:130] > # This option supports live configuration reload.
	I0314 00:22:49.492460   39054 command_runner.go:130] > # pause_image_auth_file = ""
	I0314 00:22:49.492480   39054 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0314 00:22:49.492493   39054 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0314 00:22:49.492502   39054 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0314 00:22:49.492507   39054 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0314 00:22:49.492513   39054 command_runner.go:130] > # pause_command = "/pause"
	I0314 00:22:49.492519   39054 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0314 00:22:49.492527   39054 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0314 00:22:49.492532   39054 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0314 00:22:49.492540   39054 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0314 00:22:49.492548   39054 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0314 00:22:49.492556   39054 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0314 00:22:49.492562   39054 command_runner.go:130] > # pinned_images = [
	I0314 00:22:49.492565   39054 command_runner.go:130] > # ]
	I0314 00:22:49.492571   39054 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0314 00:22:49.492577   39054 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0314 00:22:49.492585   39054 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0314 00:22:49.492591   39054 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0314 00:22:49.492599   39054 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0314 00:22:49.492603   39054 command_runner.go:130] > # signature_policy = ""
	I0314 00:22:49.492609   39054 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0314 00:22:49.492616   39054 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0314 00:22:49.492624   39054 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0314 00:22:49.492629   39054 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0314 00:22:49.492637   39054 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0314 00:22:49.492641   39054 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0314 00:22:49.492657   39054 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0314 00:22:49.492671   39054 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0314 00:22:49.492678   39054 command_runner.go:130] > # changing them here.
	I0314 00:22:49.492682   39054 command_runner.go:130] > # insecure_registries = [
	I0314 00:22:49.492688   39054 command_runner.go:130] > # ]
	I0314 00:22:49.492694   39054 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0314 00:22:49.492700   39054 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0314 00:22:49.492705   39054 command_runner.go:130] > # image_volumes = "mkdir"
	I0314 00:22:49.492711   39054 command_runner.go:130] > # Temporary directory to use for storing big files
	I0314 00:22:49.492715   39054 command_runner.go:130] > # big_files_temporary_dir = ""
	I0314 00:22:49.492721   39054 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0314 00:22:49.492734   39054 command_runner.go:130] > # CNI plugins.
	I0314 00:22:49.492740   39054 command_runner.go:130] > [crio.network]
	I0314 00:22:49.492746   39054 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0314 00:22:49.492753   39054 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0314 00:22:49.492757   39054 command_runner.go:130] > # cni_default_network = ""
	I0314 00:22:49.492765   39054 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0314 00:22:49.492770   39054 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0314 00:22:49.492775   39054 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0314 00:22:49.492781   39054 command_runner.go:130] > # plugin_dirs = [
	I0314 00:22:49.492785   39054 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0314 00:22:49.492792   39054 command_runner.go:130] > # ]
	I0314 00:22:49.492798   39054 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0314 00:22:49.492804   39054 command_runner.go:130] > [crio.metrics]
	I0314 00:22:49.492809   39054 command_runner.go:130] > # Globally enable or disable metrics support.
	I0314 00:22:49.492815   39054 command_runner.go:130] > enable_metrics = true
	I0314 00:22:49.492819   39054 command_runner.go:130] > # Specify enabled metrics collectors.
	I0314 00:22:49.492825   39054 command_runner.go:130] > # Per default all metrics are enabled.
	I0314 00:22:49.492833   39054 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0314 00:22:49.492841   39054 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0314 00:22:49.492849   39054 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0314 00:22:49.492855   39054 command_runner.go:130] > # metrics_collectors = [
	I0314 00:22:49.492859   39054 command_runner.go:130] > # 	"operations",
	I0314 00:22:49.492866   39054 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0314 00:22:49.492871   39054 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0314 00:22:49.492877   39054 command_runner.go:130] > # 	"operations_errors",
	I0314 00:22:49.492881   39054 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0314 00:22:49.492884   39054 command_runner.go:130] > # 	"image_pulls_by_name",
	I0314 00:22:49.492891   39054 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0314 00:22:49.492895   39054 command_runner.go:130] > # 	"image_pulls_failures",
	I0314 00:22:49.492901   39054 command_runner.go:130] > # 	"image_pulls_successes",
	I0314 00:22:49.492905   39054 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0314 00:22:49.492911   39054 command_runner.go:130] > # 	"image_layer_reuse",
	I0314 00:22:49.492916   39054 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0314 00:22:49.492922   39054 command_runner.go:130] > # 	"containers_oom_total",
	I0314 00:22:49.492925   39054 command_runner.go:130] > # 	"containers_oom",
	I0314 00:22:49.492931   39054 command_runner.go:130] > # 	"processes_defunct",
	I0314 00:22:49.492939   39054 command_runner.go:130] > # 	"operations_total",
	I0314 00:22:49.492946   39054 command_runner.go:130] > # 	"operations_latency_seconds",
	I0314 00:22:49.492950   39054 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0314 00:22:49.492957   39054 command_runner.go:130] > # 	"operations_errors_total",
	I0314 00:22:49.492960   39054 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0314 00:22:49.492967   39054 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0314 00:22:49.492971   39054 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0314 00:22:49.492975   39054 command_runner.go:130] > # 	"image_pulls_success_total",
	I0314 00:22:49.492980   39054 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0314 00:22:49.492984   39054 command_runner.go:130] > # 	"containers_oom_count_total",
	I0314 00:22:49.492992   39054 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0314 00:22:49.492996   39054 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0314 00:22:49.492999   39054 command_runner.go:130] > # ]
	I0314 00:22:49.493004   39054 command_runner.go:130] > # The port on which the metrics server will listen.
	I0314 00:22:49.493009   39054 command_runner.go:130] > # metrics_port = 9090
	I0314 00:22:49.493013   39054 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0314 00:22:49.493019   39054 command_runner.go:130] > # metrics_socket = ""
	I0314 00:22:49.493024   39054 command_runner.go:130] > # The certificate for the secure metrics server.
	I0314 00:22:49.493031   39054 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0314 00:22:49.493044   39054 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0314 00:22:49.493055   39054 command_runner.go:130] > # certificate on any modification event.
	I0314 00:22:49.493063   39054 command_runner.go:130] > # metrics_cert = ""
	I0314 00:22:49.493068   39054 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0314 00:22:49.493076   39054 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0314 00:22:49.493080   39054 command_runner.go:130] > # metrics_key = ""
	I0314 00:22:49.493088   39054 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0314 00:22:49.493091   39054 command_runner.go:130] > [crio.tracing]
	I0314 00:22:49.493097   39054 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0314 00:22:49.493104   39054 command_runner.go:130] > # enable_tracing = false
	I0314 00:22:49.493109   39054 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0314 00:22:49.493115   39054 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0314 00:22:49.493121   39054 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0314 00:22:49.493129   39054 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0314 00:22:49.493136   39054 command_runner.go:130] > # CRI-O NRI configuration.
	I0314 00:22:49.493139   39054 command_runner.go:130] > [crio.nri]
	I0314 00:22:49.493144   39054 command_runner.go:130] > # Globally enable or disable NRI.
	I0314 00:22:49.493154   39054 command_runner.go:130] > # enable_nri = false
	I0314 00:22:49.493161   39054 command_runner.go:130] > # NRI socket to listen on.
	I0314 00:22:49.493165   39054 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0314 00:22:49.493171   39054 command_runner.go:130] > # NRI plugin directory to use.
	I0314 00:22:49.493176   39054 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0314 00:22:49.493183   39054 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0314 00:22:49.493187   39054 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0314 00:22:49.493194   39054 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0314 00:22:49.493199   39054 command_runner.go:130] > # nri_disable_connections = false
	I0314 00:22:49.493206   39054 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0314 00:22:49.493210   39054 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0314 00:22:49.493217   39054 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0314 00:22:49.493224   39054 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0314 00:22:49.493229   39054 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0314 00:22:49.493232   39054 command_runner.go:130] > [crio.stats]
	I0314 00:22:49.493240   39054 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0314 00:22:49.493248   39054 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0314 00:22:49.493254   39054 command_runner.go:130] > # stats_collection_period = 0
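	The dump above is the effective CRI-O configuration minikube renders for this node (cgroupfs cgroup manager, pids_limit 1024, conmon as the runc monitor). As a hedged illustration only, the same knobs could be carried in a drop-in file instead of the main config; the drop-in path and the restart step below are assumptions for this sketch, not actions performed in this log.
	  # Sketch (assumption: this CRI-O build reads drop-ins from /etc/crio/crio.conf.d/)
	  sudo tee /etc/crio/crio.conf.d/99-overrides.conf <<'EOF'
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  pids_limit = 1024
	  drop_infra_ctr = false
	  EOF
	  sudo systemctl restart crio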
	I0314 00:22:49.493416   39054 cni.go:84] Creating CNI manager for ""
	I0314 00:22:49.493431   39054 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0314 00:22:49.493440   39054 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:22:49.493458   39054 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-507871 NodeName:multinode-507871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:22:49.493601   39054 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-507871"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
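	The kubeadm config printed above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a hedged sketch that this test does not run, the rendered file could be dry-run validated against the kubeadm binary minikube staged for v1.28.4:
	  # Sketch: dry-run the rendered config with the staged kubeadm binary (no changes applied)
	  sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run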
	I0314 00:22:49.493664   39054 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:22:49.504269   39054 command_runner.go:130] > kubeadm
	I0314 00:22:49.504291   39054 command_runner.go:130] > kubectl
	I0314 00:22:49.504295   39054 command_runner.go:130] > kubelet
	I0314 00:22:49.504314   39054 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:22:49.504365   39054 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:22:49.514811   39054 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0314 00:22:49.532955   39054 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:22:49.550989   39054 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0314 00:22:49.569379   39054 ssh_runner.go:195] Run: grep 192.168.39.60	control-plane.minikube.internal$ /etc/hosts
	I0314 00:22:49.573256   39054 command_runner.go:130] > 192.168.39.60	control-plane.minikube.internal
	I0314 00:22:49.573471   39054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:22:49.716887   39054 ssh_runner.go:195] Run: sudo systemctl start kubelet
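	After the daemon-reload and kubelet restart above, a quick health check could look like the following hedged sketch (not part of the logged run):
	  sudo systemctl is-active kubelet              # expect "active"
	  sudo journalctl -u kubelet --no-pager -n 20   # last few kubelet log lines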
	I0314 00:22:49.734187   39054 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871 for IP: 192.168.39.60
	I0314 00:22:49.734217   39054 certs.go:194] generating shared ca certs ...
	I0314 00:22:49.734238   39054 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:22:49.734439   39054 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:22:49.734509   39054 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:22:49.734521   39054 certs.go:256] generating profile certs ...
	I0314 00:22:49.734604   39054 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/client.key
	I0314 00:22:49.734661   39054 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/apiserver.key.3aa17428
	I0314 00:22:49.734694   39054 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/proxy-client.key
	I0314 00:22:49.734704   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 00:22:49.734715   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0314 00:22:49.734730   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 00:22:49.734740   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 00:22:49.734758   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 00:22:49.734795   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 00:22:49.734812   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 00:22:49.734822   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 00:22:49.734868   39054 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:22:49.734903   39054 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:22:49.734912   39054 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:22:49.734940   39054 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:22:49.734961   39054 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:22:49.734983   39054 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:22:49.735018   39054 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:22:49.735049   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> /usr/share/ca-certificates/122682.pem
	I0314 00:22:49.735062   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:22:49.735074   39054 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem -> /usr/share/ca-certificates/12268.pem
	I0314 00:22:49.735647   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:22:49.763664   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:22:49.789319   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:22:49.815846   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:22:49.842525   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 00:22:49.869050   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 00:22:49.894409   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:22:49.920905   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/multinode-507871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:22:49.946665   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:22:49.971541   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:22:49.997240   39054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:22:50.023614   39054 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
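	The block above copies the shared CA material, the profile's serving and client certificates, and an in-memory kubeconfig onto the node over SSH. A quick manual spot-check of the result, assuming the multinode-507871 profile from this log is still running on the host:
	    # Verify that the copied certs and kubeconfig are present on the node (hypothetical manual check).
	    minikube -p multinode-507871 ssh -- ls -l /var/lib/minikube/certs /var/lib/minikube/kubeconfig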
	I0314 00:22:50.041138   39054 ssh_runner.go:195] Run: openssl version
	I0314 00:22:50.046923   39054 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0314 00:22:50.047188   39054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:22:50.058420   39054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:22:50.063192   39054 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:22:50.063218   39054 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:22:50.063260   39054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:22:50.069297   39054 command_runner.go:130] > 3ec20f2e
	I0314 00:22:50.069395   39054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:22:50.079812   39054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:22:50.092032   39054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:22:50.096821   39054 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:22:50.096918   39054 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:22:50.096980   39054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:22:50.103212   39054 command_runner.go:130] > b5213941
	I0314 00:22:50.103409   39054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:22:50.113576   39054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:22:50.124948   39054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:22:50.129861   39054 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:22:50.129888   39054 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:22:50.129922   39054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:22:50.135935   39054 command_runner.go:130] > 51391683
	I0314 00:22:50.135989   39054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
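	The three test/ln sequences above install each extra CA into the node's trust store under its OpenSSL subject-hash name: minikubeCA.pem hashes to b5213941, so its symlink becomes /etc/ssl/certs/b5213941.0. A minimal sketch of the same convention for one certificate, reusing the path shown in the log:
	    # Compute the OpenSSL subject hash and create the hashed symlink the trust store expects.
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"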
	I0314 00:22:50.145774   39054 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:22:50.150303   39054 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:22:50.150335   39054 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0314 00:22:50.150344   39054 command_runner.go:130] > Device: 253,1	Inode: 7338557     Links: 1
	I0314 00:22:50.150356   39054 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 00:22:50.150364   39054 command_runner.go:130] > Access: 2024-03-14 00:16:28.216058597 +0000
	I0314 00:22:50.150371   39054 command_runner.go:130] > Modify: 2024-03-14 00:16:28.216058597 +0000
	I0314 00:22:50.150376   39054 command_runner.go:130] > Change: 2024-03-14 00:16:28.216058597 +0000
	I0314 00:22:50.150383   39054 command_runner.go:130] >  Birth: 2024-03-14 00:16:28.216058597 +0000
	I0314 00:22:50.150434   39054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:22:50.156271   39054 command_runner.go:130] > Certificate will not expire
	I0314 00:22:50.156335   39054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:22:50.162013   39054 command_runner.go:130] > Certificate will not expire
	I0314 00:22:50.162073   39054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:22:50.167858   39054 command_runner.go:130] > Certificate will not expire
	I0314 00:22:50.167957   39054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:22:50.173473   39054 command_runner.go:130] > Certificate will not expire
	I0314 00:22:50.173544   39054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:22:50.179104   39054 command_runner.go:130] > Certificate will not expire
	I0314 00:22:50.179157   39054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 00:22:50.184895   39054 command_runner.go:130] > Certificate will not expire
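	Each "Certificate will not expire" line is the success output of openssl's -checkend probe: the command exits non-zero if the certificate would expire within the given number of seconds (86400 s, i.e. 24 h, in this run). The same check can be reproduced by hand against any of the paths above, for example:
	    # Prints "Certificate will not expire" and exits 0 if the cert is still valid 24 h from now.
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400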
	I0314 00:22:50.184970   39054 kubeadm.go:391] StartCluster: {Name:multinode-507871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-507871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
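	The StartCluster config above records the three-node topology for this profile: a combined control-plane/worker node at 192.168.39.60 plus workers m02 (192.168.39.70) and m03 (192.168.39.156), all on the crio runtime. One way to cross-check this against the live profile, assuming it is still up:
	    # List the nodes minikube tracks for this profile (names and IPs).
	    minikube node list -p multinode-507871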
	I0314 00:22:50.185074   39054 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:22:50.185111   39054 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:22:50.223815   39054 command_runner.go:130] > 0a403f7ce3b87d14b128e411c8722cedf7587963b81e4f27ef0697b909f30530
	I0314 00:22:50.223839   39054 command_runner.go:130] > d7ab912de31f7f01362e446bece9ec16e82499aa78c614dbdc345030a7a1a20d
	I0314 00:22:50.223848   39054 command_runner.go:130] > 9c102f868585b7b3a9473445f469919e0e6ead84e81ad5d8530fabdfbd16969c
	I0314 00:22:50.223856   39054 command_runner.go:130] > 43cac9d56e9954fe3fecef5d956cd1f5aca82518b1a25227900c06316902b6cd
	I0314 00:22:50.223862   39054 command_runner.go:130] > 132b3767fdc0f7f1e2bc85f2a43c23e181ae105c607486a4a80938ca6fd6033a
	I0314 00:22:50.223870   39054 command_runner.go:130] > 97f09dd3764f0f3583729e4f8d85c01ccfe809a991df11d63266b6baf1449f4d
	I0314 00:22:50.223878   39054 command_runner.go:130] > 6318489143ee06c76a22aae895ed745c0918737b90a5e101cef6cee51b2c2e54
	I0314 00:22:50.223904   39054 command_runner.go:130] > ababf7bade675f2692cbc0b94152582f6dcc6519525a82b73e773ad3b95a3ab6
	I0314 00:22:50.223933   39054 cri.go:89] found id: "0a403f7ce3b87d14b128e411c8722cedf7587963b81e4f27ef0697b909f30530"
	I0314 00:22:50.223944   39054 cri.go:89] found id: "d7ab912de31f7f01362e446bece9ec16e82499aa78c614dbdc345030a7a1a20d"
	I0314 00:22:50.223950   39054 cri.go:89] found id: "9c102f868585b7b3a9473445f469919e0e6ead84e81ad5d8530fabdfbd16969c"
	I0314 00:22:50.223956   39054 cri.go:89] found id: "43cac9d56e9954fe3fecef5d956cd1f5aca82518b1a25227900c06316902b6cd"
	I0314 00:22:50.223961   39054 cri.go:89] found id: "132b3767fdc0f7f1e2bc85f2a43c23e181ae105c607486a4a80938ca6fd6033a"
	I0314 00:22:50.223975   39054 cri.go:89] found id: "97f09dd3764f0f3583729e4f8d85c01ccfe809a991df11d63266b6baf1449f4d"
	I0314 00:22:50.223983   39054 cri.go:89] found id: "6318489143ee06c76a22aae895ed745c0918737b90a5e101cef6cee51b2c2e54"
	I0314 00:22:50.223988   39054 cri.go:89] found id: "ababf7bade675f2692cbc0b94152582f6dcc6519525a82b73e773ad3b95a3ab6"
	I0314 00:22:50.223996   39054 cri.go:89] found id: ""
	I0314 00:22:50.224060   39054 ssh_runner.go:195] Run: sudo runc list -f json
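	The container IDs parsed above come from the crictl query that minikube issues to find existing kube-system containers before restarting the cluster. The same listing can be run on the node directly, assuming crictl is pointed at the CRI-O socket as in this run:
	    # List every kube-system container ID known to the runtime (same query as in the log above).
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system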
	
	
	==> CRI-O <==
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.742307450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710376003742282872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c0fb4b4-c046-4bf7-bee5-996ec60590e5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.743043867Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f0a6322-e431-48cc-9140-362cad517068 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.743102840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f0a6322-e431-48cc-9140-362cad517068 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.743460961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0e31ec78a89c7ca0a9dc73aecdbd9cecfc964c835242668df95acda77eebe7f,PodSandboxId:6a7e85e12aeea0b297c16c5f8cb28db0705158d10119d41c2b9687be6a3db15f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710375804340145852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9b9ad83c74dbb1c87f440444c53ca9bc54169233d2a5506e8cce0cc78dd1f0,PodSandboxId:b660159f1e6bad61d5859be6ca05cb84ba658c0bc178844e947da3d32776d2df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710375777865376188,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b8bdb8695937d178aa2167246ec21bbd86d93e0093bebaa5a1a92d060e682f,PodSandboxId:c53c270508e37eef583606a5f5af0c9c93cedf7e2b009c9c1acd2355a418cacc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710375777923072299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42
db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f3416cc9a6c63c165d8946729162d870c9de2a158523c63ebe4985c4558cb3,PodSandboxId:bf18cc0318910abf10572dc172b9506ac44b92ea10d498f73763e0d9029f3181,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710375777905011177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},
Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e283bf2d8cdb9edb01135144d5c43358db6115da0c75fd160e6555e8f5daf24d,PodSandboxId:4b576567650ab391dd934dca98a5d22e0a3dbba511bd776e6306e25b172a5d31,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710375777858009789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.k
ubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196ecd411466bf39c3aa3952e43cd1133499fe59029dc94d2cfa4061a1e763ed,PodSandboxId:be969458a98b88a08235e83dde24ddb59baad5ef23e0852df618de666bba4218,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710375773209805119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]string{io.kubernetes.container.hash: b52bba76,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95d69e88eba0a0dabfa6b362cc39e3aa3b1d859ae4dc2ba0bf2ed712cf64daf,PodSandboxId:b800f24e3af22f9fe36fcb5c368088d27ab290dcdd48d0de6e68dc2e10cabd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710375773184094945,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a96cc8a747b9e9ba5ceedd453e23e200c916661c3a2d3ce413b5e35bab257a,PodSandboxId:85e1d34ad57fde8ea8f48b2bb3e0e6dbafbeae731fc366b54540ad9d8779e420,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710375770797020070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e82f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50238b896e9f2bcc7319f3dcc94aaaadd48bf6141505c5b97bd081ce899e2ce,PodSandboxId:7446a14c719bfa8c2dbcb12be65788d3e6de7ecd7c886aac81c15cf3b451e269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710375770781461120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23110e04e7259631abff4d15fef0a37e07abe5fc2499677309dde0fd98d1b13c,PodSandboxId:9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710375463554062812,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a403f7ce3b87d14b128e411c8722cedf7587963b81e4f27ef0697b909f30530,PodSandboxId:a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710375416589118541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ab912de31f7f01362e446bece9ec16e82499aa78c614dbdc345030a7a1a20d,PodSandboxId:b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710375416550266438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c102f868585b7b3a9473445f469919e0e6ead84e81ad5d8530fabdfbd16969c,PodSandboxId:8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710375414768253541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.kubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43cac9d56e9954fe3fecef5d956cd1f5aca82518b1a25227900c06316902b6cd,PodSandboxId:2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710375411679673553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b3767fdc0f7f1e2bc85f2a43c23e181ae105c607486a4a80938ca6fd6033a,PodSandboxId:c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710375391459514429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f09dd3764f0f3583729e4f8d85c01ccfe809a991df11d63266b6baf1449f4d,PodSandboxId:c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710375391443444130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e8
2f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6318489143ee06c76a22aae895ed745c0918737b90a5e101cef6cee51b2c2e54,PodSandboxId:8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710375391413498509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]
string{io.kubernetes.container.hash: b52bba76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ababf7bade675f2692cbc0b94152582f6dcc6519525a82b73e773ad3b95a3ab6,PodSandboxId:181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710375391370912167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations
:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f0a6322-e431-48cc-9140-362cad517068 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.785840476Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc2e08d9-a3a8-412d-a019-5828c92f0116 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.785937124Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc2e08d9-a3a8-412d-a019-5828c92f0116 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.787913071Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2083db75-f847-42ba-a21c-f358cf726321 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.788348546Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710376003788322653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2083db75-f847-42ba-a21c-f358cf726321 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.788990169Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36bd9d2b-cf4a-448a-8262-109c19563552 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.789067302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36bd9d2b-cf4a-448a-8262-109c19563552 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.789491240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0e31ec78a89c7ca0a9dc73aecdbd9cecfc964c835242668df95acda77eebe7f,PodSandboxId:6a7e85e12aeea0b297c16c5f8cb28db0705158d10119d41c2b9687be6a3db15f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710375804340145852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9b9ad83c74dbb1c87f440444c53ca9bc54169233d2a5506e8cce0cc78dd1f0,PodSandboxId:b660159f1e6bad61d5859be6ca05cb84ba658c0bc178844e947da3d32776d2df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710375777865376188,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b8bdb8695937d178aa2167246ec21bbd86d93e0093bebaa5a1a92d060e682f,PodSandboxId:c53c270508e37eef583606a5f5af0c9c93cedf7e2b009c9c1acd2355a418cacc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710375777923072299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42
db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f3416cc9a6c63c165d8946729162d870c9de2a158523c63ebe4985c4558cb3,PodSandboxId:bf18cc0318910abf10572dc172b9506ac44b92ea10d498f73763e0d9029f3181,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710375777905011177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},
Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e283bf2d8cdb9edb01135144d5c43358db6115da0c75fd160e6555e8f5daf24d,PodSandboxId:4b576567650ab391dd934dca98a5d22e0a3dbba511bd776e6306e25b172a5d31,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710375777858009789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.k
ubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196ecd411466bf39c3aa3952e43cd1133499fe59029dc94d2cfa4061a1e763ed,PodSandboxId:be969458a98b88a08235e83dde24ddb59baad5ef23e0852df618de666bba4218,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710375773209805119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]string{io.kubernetes.container.hash: b52bba76,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95d69e88eba0a0dabfa6b362cc39e3aa3b1d859ae4dc2ba0bf2ed712cf64daf,PodSandboxId:b800f24e3af22f9fe36fcb5c368088d27ab290dcdd48d0de6e68dc2e10cabd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710375773184094945,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a96cc8a747b9e9ba5ceedd453e23e200c916661c3a2d3ce413b5e35bab257a,PodSandboxId:85e1d34ad57fde8ea8f48b2bb3e0e6dbafbeae731fc366b54540ad9d8779e420,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710375770797020070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e82f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50238b896e9f2bcc7319f3dcc94aaaadd48bf6141505c5b97bd081ce899e2ce,PodSandboxId:7446a14c719bfa8c2dbcb12be65788d3e6de7ecd7c886aac81c15cf3b451e269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710375770781461120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23110e04e7259631abff4d15fef0a37e07abe5fc2499677309dde0fd98d1b13c,PodSandboxId:9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710375463554062812,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a403f7ce3b87d14b128e411c8722cedf7587963b81e4f27ef0697b909f30530,PodSandboxId:a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710375416589118541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ab912de31f7f01362e446bece9ec16e82499aa78c614dbdc345030a7a1a20d,PodSandboxId:b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710375416550266438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c102f868585b7b3a9473445f469919e0e6ead84e81ad5d8530fabdfbd16969c,PodSandboxId:8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710375414768253541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.kubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43cac9d56e9954fe3fecef5d956cd1f5aca82518b1a25227900c06316902b6cd,PodSandboxId:2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710375411679673553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b3767fdc0f7f1e2bc85f2a43c23e181ae105c607486a4a80938ca6fd6033a,PodSandboxId:c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710375391459514429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f09dd3764f0f3583729e4f8d85c01ccfe809a991df11d63266b6baf1449f4d,PodSandboxId:c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710375391443444130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e8
2f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6318489143ee06c76a22aae895ed745c0918737b90a5e101cef6cee51b2c2e54,PodSandboxId:8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710375391413498509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]
string{io.kubernetes.container.hash: b52bba76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ababf7bade675f2692cbc0b94152582f6dcc6519525a82b73e773ad3b95a3ab6,PodSandboxId:181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710375391370912167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations
:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36bd9d2b-cf4a-448a-8262-109c19563552 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.838365771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=459b8196-469a-478e-a945-81dd5b4f9818 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.838440205Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=459b8196-469a-478e-a945-81dd5b4f9818 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.839889678Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab9d90de-3962-4a61-bcdb-1523ba79993e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.840285504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710376003840262924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab9d90de-3962-4a61-bcdb-1523ba79993e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.840891919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ba02ba5-ba85-4d39-8b2c-084465a5e76c name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.840943848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ba02ba5-ba85-4d39-8b2c-084465a5e76c name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.841325475Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0e31ec78a89c7ca0a9dc73aecdbd9cecfc964c835242668df95acda77eebe7f,PodSandboxId:6a7e85e12aeea0b297c16c5f8cb28db0705158d10119d41c2b9687be6a3db15f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710375804340145852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9b9ad83c74dbb1c87f440444c53ca9bc54169233d2a5506e8cce0cc78dd1f0,PodSandboxId:b660159f1e6bad61d5859be6ca05cb84ba658c0bc178844e947da3d32776d2df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710375777865376188,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b8bdb8695937d178aa2167246ec21bbd86d93e0093bebaa5a1a92d060e682f,PodSandboxId:c53c270508e37eef583606a5f5af0c9c93cedf7e2b009c9c1acd2355a418cacc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710375777923072299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42
db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f3416cc9a6c63c165d8946729162d870c9de2a158523c63ebe4985c4558cb3,PodSandboxId:bf18cc0318910abf10572dc172b9506ac44b92ea10d498f73763e0d9029f3181,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710375777905011177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},
Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e283bf2d8cdb9edb01135144d5c43358db6115da0c75fd160e6555e8f5daf24d,PodSandboxId:4b576567650ab391dd934dca98a5d22e0a3dbba511bd776e6306e25b172a5d31,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710375777858009789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.k
ubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196ecd411466bf39c3aa3952e43cd1133499fe59029dc94d2cfa4061a1e763ed,PodSandboxId:be969458a98b88a08235e83dde24ddb59baad5ef23e0852df618de666bba4218,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710375773209805119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]string{io.kubernetes.container.hash: b52bba76,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95d69e88eba0a0dabfa6b362cc39e3aa3b1d859ae4dc2ba0bf2ed712cf64daf,PodSandboxId:b800f24e3af22f9fe36fcb5c368088d27ab290dcdd48d0de6e68dc2e10cabd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710375773184094945,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a96cc8a747b9e9ba5ceedd453e23e200c916661c3a2d3ce413b5e35bab257a,PodSandboxId:85e1d34ad57fde8ea8f48b2bb3e0e6dbafbeae731fc366b54540ad9d8779e420,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710375770797020070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e82f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50238b896e9f2bcc7319f3dcc94aaaadd48bf6141505c5b97bd081ce899e2ce,PodSandboxId:7446a14c719bfa8c2dbcb12be65788d3e6de7ecd7c886aac81c15cf3b451e269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710375770781461120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23110e04e7259631abff4d15fef0a37e07abe5fc2499677309dde0fd98d1b13c,PodSandboxId:9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710375463554062812,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a403f7ce3b87d14b128e411c8722cedf7587963b81e4f27ef0697b909f30530,PodSandboxId:a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710375416589118541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ab912de31f7f01362e446bece9ec16e82499aa78c614dbdc345030a7a1a20d,PodSandboxId:b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710375416550266438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c102f868585b7b3a9473445f469919e0e6ead84e81ad5d8530fabdfbd16969c,PodSandboxId:8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710375414768253541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.kubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43cac9d56e9954fe3fecef5d956cd1f5aca82518b1a25227900c06316902b6cd,PodSandboxId:2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710375411679673553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b3767fdc0f7f1e2bc85f2a43c23e181ae105c607486a4a80938ca6fd6033a,PodSandboxId:c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710375391459514429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f09dd3764f0f3583729e4f8d85c01ccfe809a991df11d63266b6baf1449f4d,PodSandboxId:c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710375391443444130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e8
2f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6318489143ee06c76a22aae895ed745c0918737b90a5e101cef6cee51b2c2e54,PodSandboxId:8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710375391413498509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]
string{io.kubernetes.container.hash: b52bba76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ababf7bade675f2692cbc0b94152582f6dcc6519525a82b73e773ad3b95a3ab6,PodSandboxId:181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710375391370912167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations
:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ba02ba5-ba85-4d39-8b2c-084465a5e76c name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.883915908Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9e3c585-60e2-431e-9010-9ed1ff5c9dda name=/runtime.v1.RuntimeService/Version
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.883994888Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9e3c585-60e2-431e-9010-9ed1ff5c9dda name=/runtime.v1.RuntimeService/Version
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.885102604Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2211248-4655-43ce-a5c4-a82dc726059a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.885720871Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710376003885696979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2211248-4655-43ce-a5c4-a82dc726059a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.886286277Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f327910-bcd2-4e8d-90b3-266748f3c12a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.886340759Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f327910-bcd2-4e8d-90b3-266748f3c12a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:26:43 multinode-507871 crio[2844]: time="2024-03-14 00:26:43.886780712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0e31ec78a89c7ca0a9dc73aecdbd9cecfc964c835242668df95acda77eebe7f,PodSandboxId:6a7e85e12aeea0b297c16c5f8cb28db0705158d10119d41c2b9687be6a3db15f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710375804340145852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9b9ad83c74dbb1c87f440444c53ca9bc54169233d2a5506e8cce0cc78dd1f0,PodSandboxId:b660159f1e6bad61d5859be6ca05cb84ba658c0bc178844e947da3d32776d2df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710375777865376188,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b8bdb8695937d178aa2167246ec21bbd86d93e0093bebaa5a1a92d060e682f,PodSandboxId:c53c270508e37eef583606a5f5af0c9c93cedf7e2b009c9c1acd2355a418cacc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710375777923072299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42
db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f3416cc9a6c63c165d8946729162d870c9de2a158523c63ebe4985c4558cb3,PodSandboxId:bf18cc0318910abf10572dc172b9506ac44b92ea10d498f73763e0d9029f3181,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710375777905011177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},
Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e283bf2d8cdb9edb01135144d5c43358db6115da0c75fd160e6555e8f5daf24d,PodSandboxId:4b576567650ab391dd934dca98a5d22e0a3dbba511bd776e6306e25b172a5d31,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710375777858009789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.k
ubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196ecd411466bf39c3aa3952e43cd1133499fe59029dc94d2cfa4061a1e763ed,PodSandboxId:be969458a98b88a08235e83dde24ddb59baad5ef23e0852df618de666bba4218,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710375773209805119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]string{io.kubernetes.container.hash: b52bba76,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95d69e88eba0a0dabfa6b362cc39e3aa3b1d859ae4dc2ba0bf2ed712cf64daf,PodSandboxId:b800f24e3af22f9fe36fcb5c368088d27ab290dcdd48d0de6e68dc2e10cabd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710375773184094945,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a96cc8a747b9e9ba5ceedd453e23e200c916661c3a2d3ce413b5e35bab257a,PodSandboxId:85e1d34ad57fde8ea8f48b2bb3e0e6dbafbeae731fc366b54540ad9d8779e420,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710375770797020070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e82f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50238b896e9f2bcc7319f3dcc94aaaadd48bf6141505c5b97bd081ce899e2ce,PodSandboxId:7446a14c719bfa8c2dbcb12be65788d3e6de7ecd7c886aac81c15cf3b451e269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710375770781461120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23110e04e7259631abff4d15fef0a37e07abe5fc2499677309dde0fd98d1b13c,PodSandboxId:9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710375463554062812,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-vrskm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa6241da-7a44-4dd4-b00a-b3a008151fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b24782fd,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a403f7ce3b87d14b128e411c8722cedf7587963b81e4f27ef0697b909f30530,PodSandboxId:a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710375416589118541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de44d7b-d708-4151-a9d2-331fe7733508,},Annotations:map[string]string{io.kubernetes.container.hash: 9e147308,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ab912de31f7f01362e446bece9ec16e82499aa78c614dbdc345030a7a1a20d,PodSandboxId:b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710375416550266438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vlnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53e3b884-181c-4bbd-a913-dc0e653a6049,},Annotations:map[string]string{io.kubernetes.container.hash: ffa5906f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c102f868585b7b3a9473445f469919e0e6ead84e81ad5d8530fabdfbd16969c,PodSandboxId:8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710375414768253541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lwzg,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 6d2baacd-f40a-400c-b587-a4be4745ee78,},Annotations:map[string]string{io.kubernetes.container.hash: 295a4ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43cac9d56e9954fe3fecef5d956cd1f5aca82518b1a25227900c06316902b6cd,PodSandboxId:2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710375411679673553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vlzf2,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 96e17dd5-4e30-48aa-8f37-e42db89652da,},Annotations:map[string]string{io.kubernetes.container.hash: 4d922f59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b3767fdc0f7f1e2bc85f2a43c23e181ae105c607486a4a80938ca6fd6033a,PodSandboxId:c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710375391459514429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: d3269078ba2f0710950742881b1ad45f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f09dd3764f0f3583729e4f8d85c01ccfe809a991df11d63266b6baf1449f4d,PodSandboxId:c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710375391443444130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f25c10b051e8
2f6e13ba3c3d00847e1,},Annotations:map[string]string{io.kubernetes.container.hash: c540f8da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6318489143ee06c76a22aae895ed745c0918737b90a5e101cef6cee51b2c2e54,PodSandboxId:8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710375391413498509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57cfd10e401230b197eb5cbd3693e85,},Annotations:map[string]
string{io.kubernetes.container.hash: b52bba76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ababf7bade675f2692cbc0b94152582f6dcc6519525a82b73e773ad3b95a3ab6,PodSandboxId:181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710375391370912167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-507871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6acb99e901a4d3e69f051bbe79cf00c,},Annotations
:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f327910-bcd2-4e8d-90b3-266748f3c12a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c0e31ec78a89c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   6a7e85e12aeea       busybox-5b5d89c9d6-vrskm
	60b8bdb869593       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   c53c270508e37       kube-proxy-vlzf2
	94f3416cc9a6c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   bf18cc0318910       storage-provisioner
	1d9b9ad83c74d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   b660159f1e6ba       coredns-5dd5756b68-9vlnk
	e283bf2d8cdb9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   4b576567650ab       kindnet-4lwzg
	196ecd411466b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   be969458a98b8       etcd-multinode-507871
	e95d69e88eba0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   b800f24e3af22       kube-scheduler-multinode-507871
	c2a96cc8a747b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   85e1d34ad57fd       kube-apiserver-multinode-507871
	b50238b896e9f       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   7446a14c719bf       kube-controller-manager-multinode-507871
	23110e04e7259       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   9cce3068be90d       busybox-5b5d89c9d6-vrskm
	0a403f7ce3b87       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   a6027e6899044       storage-provisioner
	d7ab912de31f7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      9 minutes ago       Exited              coredns                   0                   b2755b1eac52d       coredns-5dd5756b68-9vlnk
	9c102f868585b       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    9 minutes ago       Exited              kindnet-cni               0                   8c116aa0b59e7       kindnet-4lwzg
	43cac9d56e995       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      9 minutes ago       Exited              kube-proxy                0                   2af054c67bb56       kube-proxy-vlzf2
	132b3767fdc0f       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      10 minutes ago      Exited              kube-scheduler            0                   c239bd48e2013       kube-scheduler-multinode-507871
	97f09dd3764f0       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      10 minutes ago      Exited              kube-apiserver            0                   c2de6818ded13       kube-apiserver-multinode-507871
	6318489143ee0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      10 minutes ago      Exited              etcd                      0                   8548b21c08449       etcd-multinode-507871
	ababf7bade675       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      10 minutes ago      Exited              kube-controller-manager   0                   181b46e7a6ab1       kube-controller-manager-multinode-507871
	
	
	==> coredns [1d9b9ad83c74dbb1c87f440444c53ca9bc54169233d2a5506e8cce0cc78dd1f0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42876 - 49648 "HINFO IN 3006261507411900213.5578216792479905689. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015401558s
	
	
	==> coredns [d7ab912de31f7f01362e446bece9ec16e82499aa78c614dbdc345030a7a1a20d] <==
	[INFO] 10.244.0.3:37575 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002024866s
	[INFO] 10.244.0.3:36577 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082358s
	[INFO] 10.244.0.3:47842 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084657s
	[INFO] 10.244.0.3:51068 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001376723s
	[INFO] 10.244.0.3:56425 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000043467s
	[INFO] 10.244.0.3:37459 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084063s
	[INFO] 10.244.0.3:47995 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041054s
	[INFO] 10.244.1.2:45681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140092s
	[INFO] 10.244.1.2:45576 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113922s
	[INFO] 10.244.1.2:46653 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009706s
	[INFO] 10.244.1.2:59115 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074005s
	[INFO] 10.244.0.3:41435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127009s
	[INFO] 10.244.0.3:38580 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090375s
	[INFO] 10.244.0.3:46807 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012191s
	[INFO] 10.244.0.3:45912 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005798s
	[INFO] 10.244.1.2:48361 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138669s
	[INFO] 10.244.1.2:49087 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014347s
	[INFO] 10.244.1.2:49674 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000219815s
	[INFO] 10.244.1.2:50059 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000213717s
	[INFO] 10.244.0.3:46614 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083328s
	[INFO] 10.244.0.3:52240 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000048161s
	[INFO] 10.244.0.3:52273 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000049137s
	[INFO] 10.244.0.3:45527 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000042531s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-507871
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-507871
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=multinode-507871
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T00_16_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:16:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-507871
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:26:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 00:22:56 +0000   Thu, 14 Mar 2024 00:16:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 00:22:56 +0000   Thu, 14 Mar 2024 00:16:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 00:22:56 +0000   Thu, 14 Mar 2024 00:16:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 00:22:56 +0000   Thu, 14 Mar 2024 00:16:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    multinode-507871
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 8dcd38c23ae04b89b9efc07e56cd47fa
	  System UUID:                8dcd38c2-3ae0-4b89-b9ef-c07e56cd47fa
	  Boot ID:                    0ae90c06-f75b-4c13-8ad2-654634eab994
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-vrskm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	  kube-system                 coredns-5dd5756b68-9vlnk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m54s
	  kube-system                 etcd-multinode-507871                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-4lwzg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m55s
	  kube-system                 kube-apiserver-multinode-507871             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-507871    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-vlzf2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-scheduler-multinode-507871             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m52s                  kube-proxy       
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x4 over 10m)      kubelet          Node multinode-507871 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x4 over 10m)      kubelet          Node multinode-507871 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x3 over 10m)      kubelet          Node multinode-507871 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-507871 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-507871 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-507871 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m55s                  node-controller  Node multinode-507871 event: Registered Node multinode-507871 in Controller
	  Normal  NodeReady                9m49s                  kubelet          Node multinode-507871 status is now: NodeReady
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node multinode-507871 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node multinode-507871 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x7 over 3m52s)  kubelet          Node multinode-507871 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m35s                  node-controller  Node multinode-507871 event: Registered Node multinode-507871 in Controller
	
	
	Name:               multinode-507871-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-507871-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=multinode-507871
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T00_23_40_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:23:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-507871-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:24:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 14 Mar 2024 00:24:10 +0000   Thu, 14 Mar 2024 00:25:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 14 Mar 2024 00:24:10 +0000   Thu, 14 Mar 2024 00:25:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 14 Mar 2024 00:24:10 +0000   Thu, 14 Mar 2024 00:25:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 14 Mar 2024 00:24:10 +0000   Thu, 14 Mar 2024 00:25:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    multinode-507871-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 9694808feb3441d0b5592744303f7626
	  System UUID:                9694808f-eb34-41d0-b559-2744303f7626
	  Boot ID:                    ea3d3811-6406-4c53-aa7b-d3b5cee45955
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-6624j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kindnet-jzhqr               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m15s
	  kube-system                 kube-proxy-lpvtz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  Starting                 3m1s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m15s (x5 over 9m17s)  kubelet          Node multinode-507871-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s (x5 over 9m17s)  kubelet          Node multinode-507871-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s (x5 over 9m17s)  kubelet          Node multinode-507871-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m6s                   kubelet          Node multinode-507871-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m5s (x5 over 3m6s)    kubelet          Node multinode-507871-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m5s (x5 over 3m6s)    kubelet          Node multinode-507871-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m5s (x5 over 3m6s)    kubelet          Node multinode-507871-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m                     node-controller  Node multinode-507871-m02 event: Registered Node multinode-507871-m02 in Controller
	  Normal  NodeReady                2m57s                  kubelet          Node multinode-507871-m02 status is now: NodeReady
	  Normal  NodeNotReady             100s                   node-controller  Node multinode-507871-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[ +11.519653] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.139947] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.198635] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.117799] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.239925] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.811738] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +0.063301] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.711288] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +1.292204] kauditd_printk_skb: 92 callbacks suppressed
	[  +5.982925] systemd-fstab-generator[1272]: Ignoring "noauto" option for root device
	[ +12.781995] systemd-fstab-generator[1456]: Ignoring "noauto" option for root device
	[  +0.097051] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.039279] kauditd_printk_skb: 56 callbacks suppressed
	[Mar14 00:17] kauditd_printk_skb: 16 callbacks suppressed
	[Mar14 00:22] systemd-fstab-generator[2765]: Ignoring "noauto" option for root device
	[  +0.151000] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.197757] systemd-fstab-generator[2792]: Ignoring "noauto" option for root device
	[  +0.151293] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.250614] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[  +0.772094] systemd-fstab-generator[2930]: Ignoring "noauto" option for root device
	[  +2.661608] systemd-fstab-generator[3401]: Ignoring "noauto" option for root device
	[  +1.079584] kauditd_printk_skb: 194 callbacks suppressed
	[  +5.307776] kauditd_printk_skb: 20 callbacks suppressed
	[Mar14 00:23] systemd-fstab-generator[3888]: Ignoring "noauto" option for root device
	[ +11.474700] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [196ecd411466bf39c3aa3952e43cd1133499fe59029dc94d2cfa4061a1e763ed] <==
	{"level":"info","ts":"2024-03-14T00:22:53.91347Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:22:53.913498Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:22:53.914032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a switched to configuration voters=(1901133809061542250)"}
	{"level":"info","ts":"2024-03-14T00:22:53.918675Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"94dd135126e1e7b0","local-member-id":"1a622f206f99396a","added-peer-id":"1a622f206f99396a","added-peer-peer-urls":["https://192.168.39.60:2380"]}
	{"level":"info","ts":"2024-03-14T00:22:53.91905Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"94dd135126e1e7b0","local-member-id":"1a622f206f99396a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:22:53.919109Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:22:53.950863Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T00:22:53.951002Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-03-14T00:22:53.951151Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-03-14T00:22:53.952291Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-14T00:22:53.952225Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"1a622f206f99396a","initial-advertise-peer-urls":["https://192.168.39.60:2380"],"listen-peer-urls":["https://192.168.39.60:2380"],"advertise-client-urls":["https://192.168.39.60:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.60:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T00:22:55.24465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-14T00:22:55.244766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-14T00:22:55.244804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a received MsgPreVoteResp from 1a622f206f99396a at term 2"}
	{"level":"info","ts":"2024-03-14T00:22:55.244835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became candidate at term 3"}
	{"level":"info","ts":"2024-03-14T00:22:55.244871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a received MsgVoteResp from 1a622f206f99396a at term 3"}
	{"level":"info","ts":"2024-03-14T00:22:55.244898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became leader at term 3"}
	{"level":"info","ts":"2024-03-14T00:22:55.244924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1a622f206f99396a elected leader 1a622f206f99396a at term 3"}
	{"level":"info","ts":"2024-03-14T00:22:55.250482Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1a622f206f99396a","local-member-attributes":"{Name:multinode-507871 ClientURLs:[https://192.168.39.60:2379]}","request-path":"/0/members/1a622f206f99396a/attributes","cluster-id":"94dd135126e1e7b0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T00:22:55.250883Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:22:55.250899Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T00:22:55.251015Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T00:22:55.251116Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:22:55.252366Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T00:22:55.252485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.60:2379"}
	
	
	==> etcd [6318489143ee06c76a22aae895ed745c0918737b90a5e101cef6cee51b2c2e54] <==
	WARNING: 2024/03/14 00:16:37 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-03-14T00:18:18.830082Z","caller":"traceutil/trace.go:171","msg":"trace[968168768] linearizableReadLoop","detail":"{readStateIndex:634; appliedIndex:633; }","duration":"130.171957ms","start":"2024-03-14T00:18:18.699876Z","end":"2024-03-14T00:18:18.830048Z","steps":["trace[968168768] 'read index received'  (duration: 130.030273ms)","trace[968168768] 'applied index is now lower than readState.Index'  (duration: 141.044µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T00:18:18.830638Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.645101ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T00:18:18.830713Z","caller":"traceutil/trace.go:171","msg":"trace[33776748] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:603; }","duration":"130.876422ms","start":"2024-03-14T00:18:18.69983Z","end":"2024-03-14T00:18:18.830706Z","steps":["trace[33776748] 'agreement among raft nodes before linearized reading'  (duration: 130.623497ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T00:18:18.83042Z","caller":"traceutil/trace.go:171","msg":"trace[826385628] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"155.236177ms","start":"2024-03-14T00:18:18.675165Z","end":"2024-03-14T00:18:18.830402Z","steps":["trace[826385628] 'process raft request'  (duration: 154.732215ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:18:19.062011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.073154ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-gxf88\" ","response":"range_response_count:1 size:3440"}
	{"level":"info","ts":"2024-03-14T00:18:19.062212Z","caller":"traceutil/trace.go:171","msg":"trace[969139056] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-gxf88; range_end:; response_count:1; response_revision:603; }","duration":"223.304991ms","start":"2024-03-14T00:18:18.838886Z","end":"2024-03-14T00:18:19.062191Z","steps":["trace[969139056] 'range keys from in-memory index tree'  (duration: 222.827636ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:18:19.06205Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.298451ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T00:18:19.062505Z","caller":"traceutil/trace.go:171","msg":"trace[744042170] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:603; }","duration":"221.759265ms","start":"2024-03-14T00:18:18.840732Z","end":"2024-03-14T00:18:19.062492Z","steps":["trace[744042170] 'range keys from in-memory index tree'  (duration: 221.275267ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T00:18:19.288467Z","caller":"traceutil/trace.go:171","msg":"trace[778292276] linearizableReadLoop","detail":"{readStateIndex:635; appliedIndex:634; }","duration":"212.929453ms","start":"2024-03-14T00:18:19.075519Z","end":"2024-03-14T00:18:19.288449Z","steps":["trace[778292276] 'read index received'  (duration: 212.718203ms)","trace[778292276] 'applied index is now lower than readState.Index'  (duration: 210.531µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T00:18:19.288756Z","caller":"traceutil/trace.go:171","msg":"trace[1946803402] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"215.921159ms","start":"2024-03-14T00:18:19.07282Z","end":"2024-03-14T00:18:19.288741Z","steps":["trace[1946803402] 'process raft request'  (duration: 215.475482ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:18:19.289062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.534264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T00:18:19.289146Z","caller":"traceutil/trace.go:171","msg":"trace[1611085147] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:604; }","duration":"213.634544ms","start":"2024-03-14T00:18:19.075495Z","end":"2024-03-14T00:18:19.289129Z","steps":["trace[1611085147] 'agreement among raft nodes before linearized reading'  (duration: 213.416044ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:18:19.553441Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.340397ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4137275588816839247 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:602 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T00:18:19.553643Z","caller":"traceutil/trace.go:171","msg":"trace[765541066] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"253.331405ms","start":"2024-03-14T00:18:19.3003Z","end":"2024-03-14T00:18:19.553631Z","steps":["trace[765541066] 'process raft request'  (duration: 83.019346ms)","trace[765541066] 'compare'  (duration: 169.141915ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T00:21:16.688792Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-14T00:21:16.688999Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-507871","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.60:2380"],"advertise-client-urls":["https://192.168.39.60:2379"]}
	{"level":"warn","ts":"2024-03-14T00:21:16.689196Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T00:21:16.68928Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T00:21:16.762207Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.60:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T00:21:16.762309Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.60:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-14T00:21:16.762359Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1a622f206f99396a","current-leader-member-id":"1a622f206f99396a"}
	{"level":"info","ts":"2024-03-14T00:21:16.765223Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-03-14T00:21:16.765467Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-03-14T00:21:16.765634Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-507871","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.60:2380"],"advertise-client-urls":["https://192.168.39.60:2379"]}
	
	
	==> kernel <==
	 00:26:44 up 10 min,  0 users,  load average: 0.38, 0.33, 0.19
	Linux multinode-507871 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9c102f868585b7b3a9473445f469919e0e6ead84e81ad5d8530fabdfbd16969c] <==
	I0314 00:20:35.909696       1 main.go:250] Node multinode-507871-m03 has CIDR [10.244.3.0/24] 
	I0314 00:20:45.916438       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:20:45.916491       1 main.go:227] handling current node
	I0314 00:20:45.916502       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:20:45.916508       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:20:45.916682       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:20:45.916690       1 main.go:250] Node multinode-507871-m03 has CIDR [10.244.3.0/24] 
	I0314 00:20:55.922906       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:20:55.922951       1 main.go:227] handling current node
	I0314 00:20:55.922976       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:20:55.922982       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:20:55.923114       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:20:55.923143       1 main.go:250] Node multinode-507871-m03 has CIDR [10.244.3.0/24] 
	I0314 00:21:05.937775       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:21:05.937908       1 main.go:227] handling current node
	I0314 00:21:05.937938       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:21:05.937965       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:21:05.938101       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:21:05.938123       1 main.go:250] Node multinode-507871-m03 has CIDR [10.244.3.0/24] 
	I0314 00:21:15.951443       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:21:15.951471       1 main.go:227] handling current node
	I0314 00:21:15.951494       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:21:15.951499       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:21:15.951802       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0314 00:21:15.951814       1 main.go:250] Node multinode-507871-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e283bf2d8cdb9edb01135144d5c43358db6115da0c75fd160e6555e8f5daf24d] <==
	I0314 00:25:38.842801       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:25:48.848157       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:25:48.848203       1 main.go:227] handling current node
	I0314 00:25:48.848221       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:25:48.848228       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:25:58.882823       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:25:58.882869       1 main.go:227] handling current node
	I0314 00:25:58.882879       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:25:58.882885       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:26:08.888624       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:26:08.888785       1 main.go:227] handling current node
	I0314 00:26:08.888823       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:26:08.888843       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:26:18.979503       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:26:18.979736       1 main.go:227] handling current node
	I0314 00:26:18.979763       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:26:18.979783       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:26:28.992397       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:26:28.992523       1 main.go:227] handling current node
	I0314 00:26:28.992621       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:26:28.992650       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	I0314 00:26:39.003406       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0314 00:26:39.003708       1 main.go:227] handling current node
	I0314 00:26:39.003811       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0314 00:26:39.003872       1 main.go:250] Node multinode-507871-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [97f09dd3764f0f3583729e4f8d85c01ccfe809a991df11d63266b6baf1449f4d] <==
	E0314 00:21:16.716293       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.716354       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.716413       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.716446       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.716513       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.716711       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.717083       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.717160       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.717218       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0314 00:21:16.717311       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0314 00:21:16.717476       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 00:21:16.718131       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0314 00:21:16.718905       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0314 00:21:16.719076       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0314 00:21:16.719126       1 controller.go:129] Ending legacy_token_tracking_controller
	I0314 00:21:16.719157       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0314 00:21:16.719192       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0314 00:21:16.719226       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
	I0314 00:21:16.719270       1 available_controller.go:439] Shutting down AvailableConditionController
	I0314 00:21:16.719330       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0314 00:21:16.719625       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0314 00:21:16.719668       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0314 00:21:16.720010       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0314 00:21:16.720081       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0314 00:21:16.720134       1 naming_controller.go:302] Shutting down NamingConditionController
	
	
	==> kube-apiserver [c2a96cc8a747b9e9ba5ceedd453e23e200c916661c3a2d3ce413b5e35bab257a] <==
	I0314 00:22:56.696815       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0314 00:22:56.697080       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0314 00:22:56.697112       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0314 00:22:56.725303       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 00:22:56.751035       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 00:22:56.751517       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 00:22:56.765434       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 00:22:56.751530       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 00:22:56.751737       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 00:22:56.751746       1 shared_informer.go:318] Caches are synced for configmaps
	E0314 00:22:56.776774       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0314 00:22:56.801728       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 00:22:56.801906       1 aggregator.go:166] initial CRD sync complete...
	I0314 00:22:56.801945       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 00:22:56.801952       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 00:22:56.801958       1 cache.go:39] Caches are synced for autoregister controller
	I0314 00:22:56.807290       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 00:22:57.660299       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 00:22:59.152617       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 00:22:59.272718       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 00:22:59.282992       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 00:22:59.364206       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 00:22:59.373798       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 00:23:09.166122       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 00:23:09.174239       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ababf7bade675f2692cbc0b94152582f6dcc6519525a82b73e773ad3b95a3ab6] <==
	I0314 00:17:43.975807       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="130.014µs"
	I0314 00:17:44.274486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.84759ms"
	I0314 00:17:44.274889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="152.138µs"
	I0314 00:18:16.199309       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:18:16.199772       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-507871-m03\" does not exist"
	I0314 00:18:16.211191       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-507871-m03" podCIDRs=["10.244.2.0/24"]
	I0314 00:18:16.228911       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ffqpb"
	I0314 00:18:16.229014       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gxf88"
	I0314 00:18:19.599699       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-507871-m03"
	I0314 00:18:19.599907       1 event.go:307] "Event occurred" object="multinode-507871-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-507871-m03 event: Registered Node multinode-507871-m03 in Controller"
	I0314 00:18:27.151974       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:19:00.843229       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:19:03.537719       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:19:03.538744       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-507871-m03\" does not exist"
	I0314 00:19:03.567862       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-507871-m03" podCIDRs=["10.244.3.0/24"]
	I0314 00:19:11.033268       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:19:54.657828       1 event.go:307] "Event occurred" object="multinode-507871-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-507871-m03 status is now: NodeNotReady"
	I0314 00:19:54.662152       1 event.go:307] "Event occurred" object="multinode-507871-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-507871-m02 status is now: NodeNotReady"
	I0314 00:19:54.673805       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-gxf88" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 00:19:54.684697       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-lpvtz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 00:19:54.693866       1 event.go:307] "Event occurred" object="kube-system/kindnet-ffqpb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 00:19:54.698429       1 event.go:307] "Event occurred" object="kube-system/kindnet-jzhqr" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 00:19:54.719455       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-498th" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 00:19:54.738215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.254429ms"
	I0314 00:19:54.738369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.551µs"
	
	
	==> kube-controller-manager [b50238b896e9f2bcc7319f3dcc94aaaadd48bf6141505c5b97bd081ce899e2ce] <==
	I0314 00:23:47.034376       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:23:47.055204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.924µs"
	I0314 00:23:47.073463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="105.476µs"
	I0314 00:23:49.239529       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-6624j" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-6624j"
	I0314 00:23:50.910186       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.894089ms"
	I0314 00:23:50.910339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.599µs"
	I0314 00:24:06.811953       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:24:09.243151       1 event.go:307] "Event occurred" object="multinode-507871-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-507871-m03 event: Removing Node multinode-507871-m03 from Controller"
	I0314 00:24:09.374763       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-507871-m03\" does not exist"
	I0314 00:24:09.375961       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:24:09.391087       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-507871-m03" podCIDRs=["10.244.2.0/24"]
	I0314 00:24:14.244613       1 event.go:307] "Event occurred" object="multinode-507871-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-507871-m03 event: Registered Node multinode-507871-m03 in Controller"
	I0314 00:24:17.008537       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:24:22.426338       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-507871-m02"
	I0314 00:24:24.259126       1 event.go:307] "Event occurred" object="multinode-507871-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-507871-m03 event: Removing Node multinode-507871-m03 from Controller"
	I0314 00:25:04.280263       1 event.go:307] "Event occurred" object="multinode-507871-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-507871-m02 status is now: NodeNotReady"
	I0314 00:25:04.293714       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-6624j" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 00:25:04.307604       1 event.go:307] "Event occurred" object="kube-system/kindnet-jzhqr" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 00:25:04.307687       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.025649ms"
	I0314 00:25:04.307791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="45.473µs"
	I0314 00:25:04.319080       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-lpvtz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 00:25:09.243932       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-gxf88"
	I0314 00:25:09.269733       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-gxf88"
	I0314 00:25:09.269778       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-ffqpb"
	I0314 00:25:09.296944       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-ffqpb"
	
	
	==> kube-proxy [43cac9d56e9954fe3fecef5d956cd1f5aca82518b1a25227900c06316902b6cd] <==
	I0314 00:16:51.887205       1 server_others.go:69] "Using iptables proxy"
	I0314 00:16:51.904828       1 node.go:141] Successfully retrieved node IP: 192.168.39.60
	I0314 00:16:51.953679       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 00:16:51.953701       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 00:16:51.956098       1 server_others.go:152] "Using iptables Proxier"
	I0314 00:16:51.956409       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 00:16:51.956822       1 server.go:846] "Version info" version="v1.28.4"
	I0314 00:16:51.956869       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:16:51.958871       1 config.go:188] "Starting service config controller"
	I0314 00:16:51.959182       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 00:16:51.959304       1 config.go:97] "Starting endpoint slice config controller"
	I0314 00:16:51.959358       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 00:16:51.962650       1 config.go:315] "Starting node config controller"
	I0314 00:16:51.962692       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 00:16:52.059478       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 00:16:52.059728       1 shared_informer.go:318] Caches are synced for service config
	I0314 00:16:52.063753       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [60b8bdb8695937d178aa2167246ec21bbd86d93e0093bebaa5a1a92d060e682f] <==
	I0314 00:22:58.266816       1 server_others.go:69] "Using iptables proxy"
	I0314 00:22:58.278899       1 node.go:141] Successfully retrieved node IP: 192.168.39.60
	I0314 00:22:58.349690       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 00:22:58.349743       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 00:22:58.354881       1 server_others.go:152] "Using iptables Proxier"
	I0314 00:22:58.354988       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 00:22:58.355201       1 server.go:846] "Version info" version="v1.28.4"
	I0314 00:22:58.355233       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:22:58.357233       1 config.go:188] "Starting service config controller"
	I0314 00:22:58.357277       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 00:22:58.357301       1 config.go:97] "Starting endpoint slice config controller"
	I0314 00:22:58.357305       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 00:22:58.357910       1 config.go:315] "Starting node config controller"
	I0314 00:22:58.357940       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 00:22:58.458089       1 shared_informer.go:318] Caches are synced for node config
	I0314 00:22:58.458149       1 shared_informer.go:318] Caches are synced for service config
	I0314 00:22:58.458173       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [132b3767fdc0f7f1e2bc85f2a43c23e181ae105c607486a4a80938ca6fd6033a] <==
	E0314 00:16:34.340017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 00:16:35.213867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 00:16:35.213918       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 00:16:35.282065       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 00:16:35.282197       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0314 00:16:35.321892       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 00:16:35.322029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 00:16:35.396382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 00:16:35.396820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 00:16:35.580946       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 00:16:35.581114       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 00:16:35.593266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 00:16:35.593311       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 00:16:35.598785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 00:16:35.598943       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 00:16:35.602849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 00:16:35.602951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 00:16:35.640062       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 00:16:35.640111       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 00:16:35.850436       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 00:16:35.850496       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 00:16:38.929268       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 00:21:16.693186       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0314 00:21:16.693295       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0314 00:21:16.711514       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e95d69e88eba0a0dabfa6b362cc39e3aa3b1d859ae4dc2ba0bf2ed712cf64daf] <==
	I0314 00:22:53.982622       1 serving.go:348] Generated self-signed cert in-memory
	W0314 00:22:56.708065       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 00:22:56.709502       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 00:22:56.709707       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 00:22:56.709743       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 00:22:56.740101       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 00:22:56.740152       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:22:56.741488       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 00:22:56.741644       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 00:22:56.742100       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 00:22:56.742233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 00:22:56.845291       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 00:24:52 multinode-507871 kubelet[3408]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 00:24:52 multinode-507871 kubelet[3408]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 00:24:52 multinode-507871 kubelet[3408]: E0314 00:24:52.647832    3408 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod8de44d7b-d708-4151-a9d2-331fe7733508/crio-a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13: Error finding container a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13: Status 404 returned error can't find the container with id a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13
	Mar 14 00:24:52 multinode-507871 kubelet[3408]: E0314 00:24:52.648240    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/poda57cfd10e401230b197eb5cbd3693e85/crio-8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1: Error finding container 8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1: Status 404 returned error can't find the container with id 8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1
	Mar 14 00:24:52 multinode-507871 kubelet[3408]: E0314 00:24:52.648615    3408 manager.go:1106] Failed to create existing container: /kubepods/pod6d2baacd-f40a-400c-b587-a4be4745ee78/crio-8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77: Error finding container 8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77: Status 404 returned error can't find the container with id 8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77
	Mar 14 00:24:52 multinode-507871 kubelet[3408]: E0314 00:24:52.648901    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/podd3269078ba2f0710950742881b1ad45f/crio-c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99: Error finding container c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99: Status 404 returned error can't find the container with id c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99
	Mar 14 00:24:52 multinode-507871 kubelet[3408]: E0314 00:24:52.649182    3408 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod96e17dd5-4e30-48aa-8f37-e42db89652da/crio-2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226: Error finding container 2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226: Status 404 returned error can't find the container with id 2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226
	Mar 14 00:24:52 multinode-507871 kubelet[3408]: E0314 00:24:52.649522    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod6f25c10b051e82f6e13ba3c3d00847e1/crio-c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5: Error finding container c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5: Status 404 returned error can't find the container with id c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5
	Mar 14 00:24:52 multinode-507871 kubelet[3408]: E0314 00:24:52.649823    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod53e3b884-181c-4bbd-a913-dc0e653a6049/crio-b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab: Error finding container b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab: Status 404 returned error can't find the container with id b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab
	Mar 14 00:24:52 multinode-507871 kubelet[3408]: E0314 00:24:52.650036    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/poda6acb99e901a4d3e69f051bbe79cf00c/crio-181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f: Error finding container 181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f: Status 404 returned error can't find the container with id 181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f
	Mar 14 00:24:52 multinode-507871 kubelet[3408]: E0314 00:24:52.650233    3408 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podfa6241da-7a44-4dd4-b00a-b3a008151fb5/crio-9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0: Error finding container 9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0: Status 404 returned error can't find the container with id 9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0
	Mar 14 00:25:52 multinode-507871 kubelet[3408]: E0314 00:25:52.587300    3408 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 00:25:52 multinode-507871 kubelet[3408]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 00:25:52 multinode-507871 kubelet[3408]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 00:25:52 multinode-507871 kubelet[3408]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 00:25:52 multinode-507871 kubelet[3408]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 00:25:52 multinode-507871 kubelet[3408]: E0314 00:25:52.647345    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod6f25c10b051e82f6e13ba3c3d00847e1/crio-c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5: Error finding container c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5: Status 404 returned error can't find the container with id c2de6818ded13d60c9242999978e70ee95c068ac7d312ac7dd26121cda7573e5
	Mar 14 00:25:52 multinode-507871 kubelet[3408]: E0314 00:25:52.647826    3408 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podfa6241da-7a44-4dd4-b00a-b3a008151fb5/crio-9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0: Error finding container 9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0: Status 404 returned error can't find the container with id 9cce3068be90d96c9ab2d12ab309fda7ea8035c668b5da2b183f9dbbe07615d0
	Mar 14 00:25:52 multinode-507871 kubelet[3408]: E0314 00:25:52.648174    3408 manager.go:1106] Failed to create existing container: /kubepods/pod6d2baacd-f40a-400c-b587-a4be4745ee78/crio-8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77: Error finding container 8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77: Status 404 returned error can't find the container with id 8c116aa0b59e7226cfb305ab844b86142c7d3a056f06b1f037ec0b257c725d77
	Mar 14 00:25:52 multinode-507871 kubelet[3408]: E0314 00:25:52.648483    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/poda6acb99e901a4d3e69f051bbe79cf00c/crio-181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f: Error finding container 181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f: Status 404 returned error can't find the container with id 181b46e7a6ab1b51daf2efbd4fe3496aca779b6f455c58b5a9bf3a214ef3297f
	Mar 14 00:25:52 multinode-507871 kubelet[3408]: E0314 00:25:52.648971    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/podd3269078ba2f0710950742881b1ad45f/crio-c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99: Error finding container c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99: Status 404 returned error can't find the container with id c239bd48e20132749e8a99bcf9cada2ca6a6a3bfca0ad9444be09c1488330d99
	Mar 14 00:25:52 multinode-507871 kubelet[3408]: E0314 00:25:52.649244    3408 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod96e17dd5-4e30-48aa-8f37-e42db89652da/crio-2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226: Error finding container 2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226: Status 404 returned error can't find the container with id 2af054c67bb569328988d8ce7df44173c04e710f8f728c2b85ceb13c39246226
	Mar 14 00:25:52 multinode-507871 kubelet[3408]: E0314 00:25:52.649641    3408 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod8de44d7b-d708-4151-a9d2-331fe7733508/crio-a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13: Error finding container a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13: Status 404 returned error can't find the container with id a6027e68990444c54823d49a99a917464d243639393621285abb44b87950fe13
	Mar 14 00:25:52 multinode-507871 kubelet[3408]: E0314 00:25:52.649947    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/poda57cfd10e401230b197eb5cbd3693e85/crio-8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1: Error finding container 8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1: Status 404 returned error can't find the container with id 8548b21c08449db79e298f1b16de79e2bb538d289261f857159abf8b3c4783a1
	Mar 14 00:25:52 multinode-507871 kubelet[3408]: E0314 00:25:52.650304    3408 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod53e3b884-181c-4bbd-a913-dc0e653a6049/crio-b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab: Error finding container b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab: Status 404 returned error can't find the container with id b2755b1eac52d91deee439af3f26b0b38d24017227f902d402aebd25cc045fab
	

                                                
                                                
-- /stdout --
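Note: the kubelet lines above repeatedly fail to create the KUBE-KUBELET-CANARY chain because the legacy ip6tables cannot find a nat table ("do you need to insmod?"). As a rough, hypothetical illustration of that hint only (this is not minikube or kubelet code), the Go sketch below checks whether the ip6table_nat module shows up in /proc/modules on a typical Linux guest and attempts a modprobe if it does not; a kernel with the table compiled in would not list it there, so treat the check as a hint rather than proof.

// checkip6nat.go: hypothetical helper illustrating the "do you need to
// insmod?" hint from the kubelet log above. Not part of minikube or kubelet;
// the module name and modprobe call are assumptions about a typical guest.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// ip6tableNatListed reports whether ip6table_nat appears in /proc/modules.
// Kernels with the table compiled in will not list it, so false here is
// only a hint that the module may need loading.
func ip6tableNatListed() (bool, error) {
	data, err := os.ReadFile("/proc/modules")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "ip6table_nat ") {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	listed, err := ip6tableNatListed()
	if err != nil {
		fmt.Fprintln(os.Stderr, "reading /proc/modules:", err)
		os.Exit(1)
	}
	if listed {
		fmt.Println("ip6table_nat is loaded")
		return
	}
	// Requires root; on the minikube guest the nat table would normally come
	// from the ISO's kernel configuration rather than a runtime modprobe.
	if out, err := exec.Command("modprobe", "ip6table_nat").CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "modprobe ip6table_nat failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Println("loaded ip6table_nat")
}
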
** stderr ** 
	E0314 00:26:43.463394   40553 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18375-4912/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
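Note: the "bufio.Scanner: token too long" error in the stderr above is the standard bufio.Scanner failure when a single line exceeds the scanner's limit (bufio.MaxScanTokenSize, 64 KiB by default). The sketch below is not the logs.go implementation, only a minimal example of reading a file such as lastStart.txt with an enlarged buffer so long lines do not trip bufio.ErrTooLong.

// scanlong.go: minimal sketch of avoiding "bufio.Scanner: token too long"
// by giving the scanner a larger buffer. Illustration only; minikube's real
// logs.go code is not shown here.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Stand-in for .minikube/logs/lastStart.txt from the error above.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default limit is bufio.MaxScanTokenSize (64 KiB); allow up to 1 MiB.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// With the default buffer, bufio.ErrTooLong would surface here.
		fmt.Fprintln(os.Stderr, "scan:", err)
		os.Exit(1)
	}
}

Run with a sufficiently large max argument, the scan completes instead of aborting at the first over-long line.
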
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-507871 -n multinode-507871
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-507871 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.52s)

                                                
                                    
x
+
TestPreload (302.33s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-449024 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-449024 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m39.665250754s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-449024 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-449024 image pull gcr.io/k8s-minikube/busybox: (2.399518493s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-449024
E0314 00:33:36.336654   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0314 00:34:44.448884   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-449024: exit status 82 (2m0.50998227s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-449024"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-449024 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-03-14 00:35:09.254559024 +0000 UTC m=+4137.324320509
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-449024 -n test-preload-449024
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-449024 -n test-preload-449024: exit status 3 (18.645296469s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:35:27.895128   42925 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	E0314 00:35:27.895152   42925 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-449024" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-449024" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-449024
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-449024: (1.114259333s)
--- FAIL: TestPreload (302.33s)
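Note: exit status 82 above is the code minikube stop returns for the GUEST_STOP_TIMEOUT failure shown in its stderr. As a hypothetical sketch (not the preload_test.go code), the Go program below runs the same stop command, reads the exit status through exec.ExitError, and retries a few times before giving up; the binary path and profile name are taken from the log, while the retry policy is an assumption for illustration.

// stopretry.go: hypothetical driver for "minikube stop" that inspects the
// exit status and retries on failure. Not minikube's or the test suite's
// actual logic; 82 is simply the GUEST_STOP_TIMEOUT code seen above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// stopProfile runs "minikube stop -p <profile>" and returns its exit code.
func stopProfile(profile string) (int, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", profile)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode(), nil // e.g. 82 for GUEST_STOP_TIMEOUT
		}
		return -1, err // the command could not be started at all
	}
	return 0, nil
}

func main() {
	const profile = "test-preload-449024"
	for attempt := 1; attempt <= 3; attempt++ {
		code, err := stopProfile(profile)
		if err != nil {
			fmt.Fprintln(os.Stderr, "running minikube stop:", err)
			os.Exit(1)
		}
		if code == 0 {
			fmt.Println("stopped", profile)
			return
		}
		fmt.Printf("attempt %d: minikube stop exited with %d, retrying\n", attempt, code)
		time.Sleep(10 * time.Second)
	}
	os.Exit(82)
}
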

                                                
                                    
x
+
TestKubernetesUpgrade (372.51s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-552430 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-552430 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m39.074667921s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-552430] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-552430" primary control-plane node in "kubernetes-upgrade-552430" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 00:40:39.692562   48503 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:40:39.692689   48503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:40:39.692702   48503 out.go:304] Setting ErrFile to fd 2...
	I0314 00:40:39.692708   48503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:40:39.692898   48503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:40:39.693444   48503 out.go:298] Setting JSON to false
	I0314 00:40:39.694455   48503 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4983,"bootTime":1710371857,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:40:39.694526   48503 start.go:139] virtualization: kvm guest
	I0314 00:40:39.697090   48503 out.go:177] * [kubernetes-upgrade-552430] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:40:39.698629   48503 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:40:39.700131   48503 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:40:39.698663   48503 notify.go:220] Checking for updates...
	I0314 00:40:39.701606   48503 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:40:39.703011   48503 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:40:39.704323   48503 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:40:39.705705   48503 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:40:39.707520   48503 config.go:182] Loaded profile config "cert-expiration-577166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:40:39.707659   48503 config.go:182] Loaded profile config "pause-501107": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:40:39.707733   48503 config.go:182] Loaded profile config "stopped-upgrade-848457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0314 00:40:39.707815   48503 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:40:39.745156   48503 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 00:40:39.746324   48503 start.go:297] selected driver: kvm2
	I0314 00:40:39.746335   48503 start.go:901] validating driver "kvm2" against <nil>
	I0314 00:40:39.746345   48503 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:40:39.747137   48503 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:40:39.747199   48503 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:40:39.762118   48503 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:40:39.762180   48503 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 00:40:39.762428   48503 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 00:40:39.762455   48503 cni.go:84] Creating CNI manager for ""
	I0314 00:40:39.762463   48503 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:40:39.762472   48503 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 00:40:39.762547   48503 start.go:340] cluster config:
	{Name:kubernetes-upgrade-552430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-552430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:40:39.762669   48503 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:40:39.764324   48503 out.go:177] * Starting "kubernetes-upgrade-552430" primary control-plane node in "kubernetes-upgrade-552430" cluster
	I0314 00:40:39.765379   48503 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:40:39.765410   48503 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0314 00:40:39.765423   48503 cache.go:56] Caching tarball of preloaded images
	I0314 00:40:39.765499   48503 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 00:40:39.765513   48503 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0314 00:40:39.765619   48503 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/config.json ...
	I0314 00:40:39.765645   48503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/config.json: {Name:mke9ee295cd997e3a5a90bf9efcdbd4db639d0ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:40:39.765788   48503 start.go:360] acquireMachinesLock for kubernetes-upgrade-552430: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:40:45.352156   48503 start.go:364] duration metric: took 5.586340104s to acquireMachinesLock for "kubernetes-upgrade-552430"
	I0314 00:40:45.352237   48503 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-552430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-552430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:40:45.352367   48503 start.go:125] createHost starting for "" (driver="kvm2")
	I0314 00:40:45.354460   48503 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 00:40:45.354686   48503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:40:45.354731   48503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:40:45.375336   48503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40787
	I0314 00:40:45.375828   48503 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:40:45.376462   48503 main.go:141] libmachine: Using API Version  1
	I0314 00:40:45.376486   48503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:40:45.376847   48503 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:40:45.377049   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetMachineName
	I0314 00:40:45.377187   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:40:45.377348   48503 start.go:159] libmachine.API.Create for "kubernetes-upgrade-552430" (driver="kvm2")
	I0314 00:40:45.377379   48503 client.go:168] LocalClient.Create starting
	I0314 00:40:45.377411   48503 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem
	I0314 00:40:45.377448   48503 main.go:141] libmachine: Decoding PEM data...
	I0314 00:40:45.377476   48503 main.go:141] libmachine: Parsing certificate...
	I0314 00:40:45.377544   48503 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem
	I0314 00:40:45.377584   48503 main.go:141] libmachine: Decoding PEM data...
	I0314 00:40:45.377606   48503 main.go:141] libmachine: Parsing certificate...
	I0314 00:40:45.377640   48503 main.go:141] libmachine: Running pre-create checks...
	I0314 00:40:45.377658   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .PreCreateCheck
	I0314 00:40:45.378001   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetConfigRaw
	I0314 00:40:45.378424   48503 main.go:141] libmachine: Creating machine...
	I0314 00:40:45.378443   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .Create
	I0314 00:40:45.378677   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Creating KVM machine...
	I0314 00:40:45.380046   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found existing default KVM network
	I0314 00:40:45.381496   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:45.381322   48605 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:83:b9:5c} reservation:<nil>}
	I0314 00:40:45.382811   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:45.382657   48605 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:db:25:bb} reservation:<nil>}
	I0314 00:40:45.384218   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:45.384089   48605 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000205d40}
	I0314 00:40:45.384246   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | created network xml: 
	I0314 00:40:45.384255   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | <network>
	I0314 00:40:45.384266   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG |   <name>mk-kubernetes-upgrade-552430</name>
	I0314 00:40:45.384279   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG |   <dns enable='no'/>
	I0314 00:40:45.384287   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG |   
	I0314 00:40:45.384297   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0314 00:40:45.384306   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG |     <dhcp>
	I0314 00:40:45.384315   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0314 00:40:45.384324   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG |     </dhcp>
	I0314 00:40:45.384331   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG |   </ip>
	I0314 00:40:45.384340   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG |   
	I0314 00:40:45.384347   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | </network>
	I0314 00:40:45.384357   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | 
	I0314 00:40:45.389928   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | trying to create private KVM network mk-kubernetes-upgrade-552430 192.168.61.0/24...
	I0314 00:40:45.467459   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | private KVM network mk-kubernetes-upgrade-552430 192.168.61.0/24 created
	I0314 00:40:45.467489   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Setting up store path in /home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430 ...
	I0314 00:40:45.467505   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:45.467430   48605 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:40:45.467525   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Building disk image from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 00:40:45.467615   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Downloading /home/jenkins/minikube-integration/18375-4912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 00:40:45.710927   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:45.710791   48605 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430/id_rsa...
	I0314 00:40:46.001795   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:46.001664   48605 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430/kubernetes-upgrade-552430.rawdisk...
	I0314 00:40:46.001828   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | Writing magic tar header
	I0314 00:40:46.001844   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | Writing SSH key tar header
	I0314 00:40:46.001857   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:46.001777   48605 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430 ...
	I0314 00:40:46.001873   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430
	I0314 00:40:46.001905   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430 (perms=drwx------)
	I0314 00:40:46.001917   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines (perms=drwxr-xr-x)
	I0314 00:40:46.001931   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines
	I0314 00:40:46.001944   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:40:46.001954   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912
	I0314 00:40:46.001977   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 00:40:46.001986   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | Checking permissions on dir: /home/jenkins
	I0314 00:40:46.001998   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube (perms=drwxr-xr-x)
	I0314 00:40:46.002019   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912 (perms=drwxrwxr-x)
	I0314 00:40:46.002029   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 00:40:46.002039   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 00:40:46.002047   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Creating domain...
	I0314 00:40:46.002071   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | Checking permissions on dir: /home
	I0314 00:40:46.002091   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | Skipping /home - not owner
	I0314 00:40:46.003668   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) define libvirt domain using xml: 
	I0314 00:40:46.003694   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) <domain type='kvm'>
	I0314 00:40:46.003705   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)   <name>kubernetes-upgrade-552430</name>
	I0314 00:40:46.003713   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)   <memory unit='MiB'>2200</memory>
	I0314 00:40:46.003722   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)   <vcpu>2</vcpu>
	I0314 00:40:46.003730   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)   <features>
	I0314 00:40:46.003738   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     <acpi/>
	I0314 00:40:46.003747   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     <apic/>
	I0314 00:40:46.003755   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     <pae/>
	I0314 00:40:46.003762   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     
	I0314 00:40:46.003793   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)   </features>
	I0314 00:40:46.003825   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)   <cpu mode='host-passthrough'>
	I0314 00:40:46.003855   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)   
	I0314 00:40:46.003884   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)   </cpu>
	I0314 00:40:46.003903   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)   <os>
	I0314 00:40:46.003919   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     <type>hvm</type>
	I0314 00:40:46.003933   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     <boot dev='cdrom'/>
	I0314 00:40:46.003960   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     <boot dev='hd'/>
	I0314 00:40:46.003975   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     <bootmenu enable='no'/>
	I0314 00:40:46.003982   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)   </os>
	I0314 00:40:46.003991   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)   <devices>
	I0314 00:40:46.004003   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     <disk type='file' device='cdrom'>
	I0314 00:40:46.004021   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430/boot2docker.iso'/>
	I0314 00:40:46.004036   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)       <target dev='hdc' bus='scsi'/>
	I0314 00:40:46.004048   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)       <readonly/>
	I0314 00:40:46.004058   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     </disk>
	I0314 00:40:46.004070   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     <disk type='file' device='disk'>
	I0314 00:40:46.004083   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 00:40:46.004101   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430/kubernetes-upgrade-552430.rawdisk'/>
	I0314 00:40:46.004112   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)       <target dev='hda' bus='virtio'/>
	I0314 00:40:46.004142   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     </disk>
	I0314 00:40:46.004169   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     <interface type='network'>
	I0314 00:40:46.004182   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)       <source network='mk-kubernetes-upgrade-552430'/>
	I0314 00:40:46.004190   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)       <model type='virtio'/>
	I0314 00:40:46.004199   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     </interface>
	I0314 00:40:46.004207   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     <interface type='network'>
	I0314 00:40:46.004216   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)       <source network='default'/>
	I0314 00:40:46.004222   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)       <model type='virtio'/>
	I0314 00:40:46.004231   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     </interface>
	I0314 00:40:46.004243   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     <serial type='pty'>
	I0314 00:40:46.004251   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)       <target port='0'/>
	I0314 00:40:46.004258   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     </serial>
	I0314 00:40:46.004269   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     <console type='pty'>
	I0314 00:40:46.004277   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)       <target type='serial' port='0'/>
	I0314 00:40:46.004285   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     </console>
	I0314 00:40:46.004292   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     <rng model='virtio'>
	I0314 00:40:46.004303   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)       <backend model='random'>/dev/random</backend>
	I0314 00:40:46.004325   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     </rng>
	I0314 00:40:46.004335   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     
	I0314 00:40:46.004341   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)     
	I0314 00:40:46.004349   48503 main.go:141] libmachine: (kubernetes-upgrade-552430)   </devices>
	I0314 00:40:46.004358   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) </domain>
	I0314 00:40:46.004370   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) 
	I0314 00:40:46.008877   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:6f:d1:55 in network default
	I0314 00:40:46.009691   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Ensuring networks are active...
	I0314 00:40:46.009724   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:40:46.010687   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Ensuring network default is active
	I0314 00:40:46.011114   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Ensuring network mk-kubernetes-upgrade-552430 is active
	I0314 00:40:46.011616   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Getting domain xml...
	I0314 00:40:46.012482   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Creating domain...
	I0314 00:40:47.939445   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Waiting to get IP...
	I0314 00:40:47.940421   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:40:47.940859   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find current IP address of domain kubernetes-upgrade-552430 in network mk-kubernetes-upgrade-552430
	I0314 00:40:47.940888   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:47.940825   48605 retry.go:31] will retry after 241.329909ms: waiting for machine to come up
	I0314 00:40:48.187813   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:40:48.188430   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find current IP address of domain kubernetes-upgrade-552430 in network mk-kubernetes-upgrade-552430
	I0314 00:40:48.188462   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:48.188349   48605 retry.go:31] will retry after 281.94694ms: waiting for machine to come up
	I0314 00:40:48.472221   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:40:48.472820   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find current IP address of domain kubernetes-upgrade-552430 in network mk-kubernetes-upgrade-552430
	I0314 00:40:48.472848   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:48.472736   48605 retry.go:31] will retry after 391.282292ms: waiting for machine to come up
	I0314 00:40:48.865395   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:40:48.865990   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find current IP address of domain kubernetes-upgrade-552430 in network mk-kubernetes-upgrade-552430
	I0314 00:40:48.866018   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:48.865948   48605 retry.go:31] will retry after 431.371013ms: waiting for machine to come up
	I0314 00:40:49.298859   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:40:49.301255   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find current IP address of domain kubernetes-upgrade-552430 in network mk-kubernetes-upgrade-552430
	I0314 00:40:49.301283   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:49.301069   48605 retry.go:31] will retry after 608.800582ms: waiting for machine to come up
	I0314 00:40:49.912060   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:40:49.912585   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find current IP address of domain kubernetes-upgrade-552430 in network mk-kubernetes-upgrade-552430
	I0314 00:40:49.912615   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:49.912542   48605 retry.go:31] will retry after 834.887773ms: waiting for machine to come up
	I0314 00:40:50.749089   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:40:50.749951   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find current IP address of domain kubernetes-upgrade-552430 in network mk-kubernetes-upgrade-552430
	I0314 00:40:50.749991   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:50.749856   48605 retry.go:31] will retry after 831.380459ms: waiting for machine to come up
	I0314 00:40:51.582806   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:40:51.583362   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find current IP address of domain kubernetes-upgrade-552430 in network mk-kubernetes-upgrade-552430
	I0314 00:40:51.583391   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:51.583311   48605 retry.go:31] will retry after 1.178532973s: waiting for machine to come up
	I0314 00:40:52.763849   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:40:52.764267   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find current IP address of domain kubernetes-upgrade-552430 in network mk-kubernetes-upgrade-552430
	I0314 00:40:52.764293   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:52.764234   48605 retry.go:31] will retry after 1.730533915s: waiting for machine to come up
	I0314 00:40:54.496663   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:40:54.497136   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find current IP address of domain kubernetes-upgrade-552430 in network mk-kubernetes-upgrade-552430
	I0314 00:40:54.497165   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:54.497068   48605 retry.go:31] will retry after 2.085407999s: waiting for machine to come up
	I0314 00:40:56.583900   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:40:56.584447   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find current IP address of domain kubernetes-upgrade-552430 in network mk-kubernetes-upgrade-552430
	I0314 00:40:56.584477   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:56.584403   48605 retry.go:31] will retry after 2.863781707s: waiting for machine to come up
	I0314 00:40:59.451548   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:40:59.452033   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find current IP address of domain kubernetes-upgrade-552430 in network mk-kubernetes-upgrade-552430
	I0314 00:40:59.452065   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:40:59.451983   48605 retry.go:31] will retry after 3.01156639s: waiting for machine to come up
	I0314 00:41:02.464880   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:02.465344   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find current IP address of domain kubernetes-upgrade-552430 in network mk-kubernetes-upgrade-552430
	I0314 00:41:02.465380   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:41:02.465293   48605 retry.go:31] will retry after 3.423471858s: waiting for machine to come up
	I0314 00:41:05.890211   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:05.890642   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find current IP address of domain kubernetes-upgrade-552430 in network mk-kubernetes-upgrade-552430
	I0314 00:41:05.890667   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | I0314 00:41:05.890601   48605 retry.go:31] will retry after 3.971997294s: waiting for machine to come up
	I0314 00:41:09.864410   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:09.864984   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Found IP for machine: 192.168.61.34
	I0314 00:41:09.865004   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Reserving static IP address...
	I0314 00:41:09.865022   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has current primary IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:09.865503   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-552430", mac: "52:54:00:eb:57:12", ip: "192.168.61.34"} in network mk-kubernetes-upgrade-552430
	I0314 00:41:09.941068   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Reserved static IP address: 192.168.61.34
	I0314 00:41:09.941095   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | Getting to WaitForSSH function...
	I0314 00:41:09.941105   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Waiting for SSH to be available...
	I0314 00:41:09.943758   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:09.944094   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:minikube Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:09.944129   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:09.944214   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | Using SSH client type: external
	I0314 00:41:09.944241   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430/id_rsa (-rw-------)
	I0314 00:41:09.944274   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:41:09.944290   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | About to run SSH command:
	I0314 00:41:09.944306   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | exit 0
	I0314 00:41:10.070890   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | SSH cmd err, output: <nil>: 
	I0314 00:41:10.071150   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) KVM machine creation complete!
	I0314 00:41:10.071522   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetConfigRaw
	I0314 00:41:10.072057   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:41:10.072236   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:41:10.072409   48503 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0314 00:41:10.072429   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetState
	I0314 00:41:10.073861   48503 main.go:141] libmachine: Detecting operating system of created instance...
	I0314 00:41:10.073879   48503 main.go:141] libmachine: Waiting for SSH to be available...
	I0314 00:41:10.073886   48503 main.go:141] libmachine: Getting to WaitForSSH function...
	I0314 00:41:10.073896   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:41:10.076173   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.076658   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:10.076695   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.076833   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:41:10.077018   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:10.077158   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:10.077326   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:41:10.077526   48503 main.go:141] libmachine: Using SSH client type: native
	I0314 00:41:10.077773   48503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0314 00:41:10.077788   48503 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0314 00:41:10.186459   48503 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:41:10.186482   48503 main.go:141] libmachine: Detecting the provisioner...
	I0314 00:41:10.186494   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:41:10.189553   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.189899   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:10.189934   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.190068   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:41:10.190260   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:10.190421   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:10.190537   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:41:10.190705   48503 main.go:141] libmachine: Using SSH client type: native
	I0314 00:41:10.190906   48503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0314 00:41:10.190929   48503 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0314 00:41:10.300023   48503 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0314 00:41:10.300087   48503 main.go:141] libmachine: found compatible host: buildroot
	I0314 00:41:10.300095   48503 main.go:141] libmachine: Provisioning with buildroot...
	I0314 00:41:10.300102   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetMachineName
	I0314 00:41:10.300327   48503 buildroot.go:166] provisioning hostname "kubernetes-upgrade-552430"
	I0314 00:41:10.300344   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetMachineName
	I0314 00:41:10.300548   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:41:10.303316   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.303660   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:10.303705   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.303843   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:41:10.304047   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:10.304221   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:10.304375   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:41:10.304550   48503 main.go:141] libmachine: Using SSH client type: native
	I0314 00:41:10.304733   48503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0314 00:41:10.304751   48503 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-552430 && echo "kubernetes-upgrade-552430" | sudo tee /etc/hostname
	I0314 00:41:10.426932   48503 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-552430
	
	I0314 00:41:10.426959   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:41:10.429759   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.430158   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:10.430188   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.430491   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:41:10.430703   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:10.430999   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:10.431199   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:41:10.431388   48503 main.go:141] libmachine: Using SSH client type: native
	I0314 00:41:10.431604   48503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0314 00:41:10.431622   48503 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-552430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-552430/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-552430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:41:10.549419   48503 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:41:10.549457   48503 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:41:10.549489   48503 buildroot.go:174] setting up certificates
	I0314 00:41:10.549502   48503 provision.go:84] configureAuth start
	I0314 00:41:10.549516   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetMachineName
	I0314 00:41:10.549833   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetIP
	I0314 00:41:10.552591   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.552919   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:10.552949   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.553074   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:41:10.555699   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.556079   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:10.556119   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.556269   48503 provision.go:143] copyHostCerts
	I0314 00:41:10.556338   48503 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:41:10.556351   48503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:41:10.556416   48503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:41:10.556536   48503 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:41:10.556546   48503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:41:10.556577   48503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:41:10.556666   48503 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:41:10.556676   48503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:41:10.556704   48503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:41:10.556779   48503 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-552430 san=[127.0.0.1 192.168.61.34 kubernetes-upgrade-552430 localhost minikube]
	I0314 00:41:10.791853   48503 provision.go:177] copyRemoteCerts
	I0314 00:41:10.791920   48503 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:41:10.791952   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:41:10.794946   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.795256   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:10.795286   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.795448   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:41:10.795623   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:10.795748   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:41:10.795916   48503 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430/id_rsa Username:docker}
	I0314 00:41:10.882303   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:41:10.912888   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0314 00:41:10.942808   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:41:10.968928   48503 provision.go:87] duration metric: took 419.414229ms to configureAuth
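[Editor's note, not part of the test output] The configureAuth step above generates a machine server certificate signed by the local minikube CA with SANs [127.0.0.1 192.168.61.34 kubernetes-upgrade-552430 localhost minikube], then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. minikube does this in Go; the openssl sketch below is only an illustrative equivalent of that certificate, not a command the test ran.
	# illustrative sketch only: issue a server cert with the same SANs, signed by ca.pem/ca-key.pem
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.kubernetes-upgrade-552430" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.61.34,DNS:kubernetes-upgrade-552430,DNS:localhost,DNS:minikube") \
	  -out server.pem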
	I0314 00:41:10.968951   48503 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:41:10.969099   48503 config.go:182] Loaded profile config "kubernetes-upgrade-552430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:41:10.969168   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:41:10.971687   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.972048   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:10.972081   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:10.972265   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:41:10.972471   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:10.972615   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:10.972762   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:41:10.972919   48503 main.go:141] libmachine: Using SSH client type: native
	I0314 00:41:10.973074   48503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0314 00:41:10.973090   48503 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:41:11.253514   48503 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:41:11.253552   48503 main.go:141] libmachine: Checking connection to Docker...
	I0314 00:41:11.253566   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetURL
	I0314 00:41:11.254903   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | Using libvirt version 6000000
	I0314 00:41:11.257463   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:11.257797   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:11.257827   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:11.258022   48503 main.go:141] libmachine: Docker is up and running!
	I0314 00:41:11.258044   48503 main.go:141] libmachine: Reticulating splines...
	I0314 00:41:11.258050   48503 client.go:171] duration metric: took 25.880664583s to LocalClient.Create
	I0314 00:41:11.258071   48503 start.go:167] duration metric: took 25.880726514s to libmachine.API.Create "kubernetes-upgrade-552430"
	I0314 00:41:11.258080   48503 start.go:293] postStartSetup for "kubernetes-upgrade-552430" (driver="kvm2")
	I0314 00:41:11.258095   48503 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:41:11.258116   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:41:11.258313   48503 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:41:11.258341   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:41:11.260631   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:11.261047   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:11.261093   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:11.261233   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:41:11.261410   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:11.261554   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:41:11.261684   48503 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430/id_rsa Username:docker}
	I0314 00:41:11.346369   48503 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:41:11.350997   48503 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:41:11.351026   48503 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:41:11.351092   48503 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:41:11.351176   48503 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:41:11.351266   48503 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:41:11.361310   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:41:11.386847   48503 start.go:296] duration metric: took 128.753859ms for postStartSetup
	I0314 00:41:11.386908   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetConfigRaw
	I0314 00:41:11.387549   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetIP
	I0314 00:41:11.390306   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:11.390707   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:11.390733   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:11.391000   48503 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/config.json ...
	I0314 00:41:11.391238   48503 start.go:128] duration metric: took 26.038856835s to createHost
	I0314 00:41:11.391263   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:41:11.393603   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:11.393936   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:11.393966   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:11.394105   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:41:11.394299   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:11.394451   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:11.394599   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:41:11.394822   48503 main.go:141] libmachine: Using SSH client type: native
	I0314 00:41:11.394991   48503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0314 00:41:11.395007   48503 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 00:41:11.503834   48503 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710376871.486881595
	
	I0314 00:41:11.503855   48503 fix.go:216] guest clock: 1710376871.486881595
	I0314 00:41:11.503862   48503 fix.go:229] Guest: 2024-03-14 00:41:11.486881595 +0000 UTC Remote: 2024-03-14 00:41:11.391252836 +0000 UTC m=+31.754365443 (delta=95.628759ms)
	I0314 00:41:11.503891   48503 fix.go:200] guest clock delta is within tolerance: 95.628759ms
	I0314 00:41:11.503897   48503 start.go:83] releasing machines lock for "kubernetes-upgrade-552430", held for 26.151695573s
	I0314 00:41:11.503928   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:41:11.504204   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetIP
	I0314 00:41:11.507205   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:11.507640   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:11.507675   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:11.507832   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:41:11.508458   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:41:11.508696   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:41:11.508756   48503 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:41:11.508801   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:41:11.509067   48503 ssh_runner.go:195] Run: cat /version.json
	I0314 00:41:11.509096   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:41:11.511566   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:11.511869   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:11.511939   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:11.511962   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:11.512259   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:41:11.512373   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:11.512408   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:11.512450   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:11.512549   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:41:11.512641   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:41:11.512711   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:41:11.512780   48503 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430/id_rsa Username:docker}
	I0314 00:41:11.512818   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:41:11.512970   48503 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430/id_rsa Username:docker}
	I0314 00:41:11.592570   48503 ssh_runner.go:195] Run: systemctl --version
	I0314 00:41:11.631984   48503 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:41:11.799741   48503 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:41:11.807185   48503 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:41:11.807270   48503 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:41:11.828505   48503 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:41:11.828533   48503 start.go:494] detecting cgroup driver to use...
	I0314 00:41:11.828600   48503 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:41:11.849907   48503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:41:11.865034   48503 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:41:11.865108   48503 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:41:11.880001   48503 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:41:11.895295   48503 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:41:12.023887   48503 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:41:12.180658   48503 docker.go:233] disabling docker service ...
	I0314 00:41:12.180716   48503 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:41:12.198324   48503 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:41:12.215797   48503 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:41:12.367427   48503 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:41:12.503754   48503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
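[Editor's note, not part of the test output] Before configuring cri-o, the run stops, disables and masks the competing runtimes (cri-dockerd and docker). Condensed from the commands logged above, for readability:
	# stop and mask cri-dockerd
	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	# stop and mask docker itself
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service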
	I0314 00:41:12.518974   48503 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:41:12.538603   48503 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 00:41:12.538659   48503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:41:12.549253   48503 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:41:12.549313   48503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:41:12.560125   48503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:41:12.571616   48503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:41:12.582950   48503 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:41:12.594022   48503 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:41:12.605145   48503 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:41:12.605199   48503 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:41:12.618909   48503 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:41:12.631371   48503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:41:12.789569   48503 ssh_runner.go:195] Run: sudo systemctl restart crio
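[Editor's note, not part of the test output] The block above rewrites cri-o's drop-in config (pause image, cgroupfs cgroup manager, conmon cgroup), clears the stale CNI config, loads br_netfilter, enables IPv4 forwarding, and restarts the runtime. Gathered into one sequence for readability, with paths exactly as logged:
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo rm -rf /etc/cni/net.mk
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio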
	I0314 00:41:12.960506   48503 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:41:12.960575   48503 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:41:12.966266   48503 start.go:562] Will wait 60s for crictl version
	I0314 00:41:12.966328   48503 ssh_runner.go:195] Run: which crictl
	I0314 00:41:12.970638   48503 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:41:13.017232   48503 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:41:13.017323   48503 ssh_runner.go:195] Run: crio --version
	I0314 00:41:13.047135   48503 ssh_runner.go:195] Run: crio --version
	I0314 00:41:13.084537   48503 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 00:41:13.085752   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetIP
	I0314 00:41:13.089047   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:13.089501   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:41:13.089537   48503 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:41:13.089819   48503 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0314 00:41:13.094348   48503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:41:13.110230   48503 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-552430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-552430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:41:13.110339   48503 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:41:13.110388   48503 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:41:13.148600   48503 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:41:13.148664   48503 ssh_runner.go:195] Run: which lz4
	I0314 00:41:13.153048   48503 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 00:41:13.157612   48503 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:41:13.157644   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 00:41:15.146528   48503 crio.go:444] duration metric: took 1.993525098s to copy over tarball
	I0314 00:41:15.146626   48503 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:41:18.092995   48503 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946327724s)
	I0314 00:41:18.093029   48503 crio.go:451] duration metric: took 2.946463842s to extract the tarball
	I0314 00:41:18.093040   48503 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:41:18.139663   48503 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:41:18.194896   48503 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:41:18.194923   48503 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:41:18.194974   48503 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:41:18.195004   48503 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:41:18.195056   48503 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 00:41:18.195011   48503 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:41:18.195305   48503 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:41:18.195548   48503 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 00:41:18.195597   48503 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:41:18.195811   48503 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:41:18.196566   48503 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:41:18.196621   48503 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:41:18.196566   48503 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:41:18.196593   48503 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 00:41:18.196618   48503 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:41:18.196971   48503 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:41:18.196983   48503 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 00:41:18.196996   48503 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:41:18.428171   48503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 00:41:18.460865   48503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 00:41:18.460956   48503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 00:41:18.464099   48503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:41:18.466106   48503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:41:18.466345   48503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:41:18.485122   48503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:41:18.493369   48503 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 00:41:18.493406   48503 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 00:41:18.493447   48503 ssh_runner.go:195] Run: which crictl
	I0314 00:41:18.621788   48503 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 00:41:18.621961   48503 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 00:41:18.622043   48503 ssh_runner.go:195] Run: which crictl
	I0314 00:41:18.635187   48503 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 00:41:18.635287   48503 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:41:18.635383   48503 ssh_runner.go:195] Run: which crictl
	I0314 00:41:18.644246   48503 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 00:41:18.644409   48503 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:41:18.644503   48503 ssh_runner.go:195] Run: which crictl
	I0314 00:41:18.645022   48503 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 00:41:18.645097   48503 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:41:18.645167   48503 ssh_runner.go:195] Run: which crictl
	I0314 00:41:18.693111   48503 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 00:41:18.693158   48503 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:41:18.693179   48503 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 00:41:18.693203   48503 ssh_runner.go:195] Run: which crictl
	I0314 00:41:18.693221   48503 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:41:18.693286   48503 ssh_runner.go:195] Run: which crictl
	I0314 00:41:18.693292   48503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 00:41:18.693299   48503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 00:41:18.693352   48503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 00:41:18.693379   48503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:41:18.693403   48503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:41:18.837368   48503 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 00:41:18.837516   48503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:41:18.837550   48503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:41:18.837757   48503 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 00:41:18.841232   48503 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 00:41:18.841327   48503 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 00:41:18.841418   48503 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 00:41:18.887599   48503 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 00:41:18.888317   48503 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 00:41:19.226113   48503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:41:19.381047   48503 cache_images.go:92] duration metric: took 1.186105423s to LoadCachedImages
	W0314 00:41:19.381139   48503 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
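[Editor's note, not part of the test output] The warning above means the per-image cache under the Jenkins host's .minikube/cache/images directory held none of the v1.20.0 images, so the run falls back to whatever the preload/registry provides later. A quick diagnostic to see what is actually cached on that host (not a command from this run):
	ls /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/ 2>/dev/null \
	  || echo "no cached registry.k8s.io images"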
	I0314 00:41:19.381159   48503 kubeadm.go:928] updating node { 192.168.61.34 8443 v1.20.0 crio true true} ...
	I0314 00:41:19.381304   48503 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-552430 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-552430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:41:19.381402   48503 ssh_runner.go:195] Run: crio config
	I0314 00:41:19.440633   48503 cni.go:84] Creating CNI manager for ""
	I0314 00:41:19.440662   48503 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:41:19.440673   48503 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:41:19.440690   48503 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.34 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-552430 NodeName:kubernetes-upgrade-552430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 00:41:19.440830   48503 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-552430"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:41:19.440907   48503 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 00:41:19.451903   48503 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:41:19.452004   48503 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:41:19.462012   48503 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0314 00:41:19.480379   48503 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:41:19.502329   48503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
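[Editor's note, not part of the test output] The kubeadm config printed above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new here. The kubeadm invocation itself is not part of this excerpt; given the binary layout shown in the log it would look roughly like the following (an illustrative assumption, not a logged command):
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new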
	I0314 00:41:19.523528   48503 ssh_runner.go:195] Run: grep 192.168.61.34	control-plane.minikube.internal$ /etc/hosts
	I0314 00:41:19.527476   48503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:41:19.540863   48503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:41:19.674168   48503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:41:19.692027   48503 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430 for IP: 192.168.61.34
	I0314 00:41:19.692064   48503 certs.go:194] generating shared ca certs ...
	I0314 00:41:19.692131   48503 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:41:19.692358   48503 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:41:19.692435   48503 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:41:19.692454   48503 certs.go:256] generating profile certs ...
	I0314 00:41:19.692521   48503 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/client.key
	I0314 00:41:19.692540   48503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/client.crt with IP's: []
	I0314 00:41:19.805742   48503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/client.crt ...
	I0314 00:41:19.805779   48503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/client.crt: {Name:mke06264a6efd0d3b61f165b2eb95a12d68364a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:41:19.805982   48503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/client.key ...
	I0314 00:41:19.806013   48503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/client.key: {Name:mkf42a309be492ebfab89c185ac6b6cec7e002ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:41:19.806150   48503 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.key.bdc00de9
	I0314 00:41:19.806175   48503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.crt.bdc00de9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.34]
	I0314 00:41:19.910222   48503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.crt.bdc00de9 ...
	I0314 00:41:19.910247   48503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.crt.bdc00de9: {Name:mk804119c114e6b0273dd0d96ab1a9de9bb195ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:41:19.910430   48503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.key.bdc00de9 ...
	I0314 00:41:19.910451   48503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.key.bdc00de9: {Name:mk3df4c9ecceb924281a8c893500ff42974ca220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:41:19.910546   48503 certs.go:381] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.crt.bdc00de9 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.crt
	I0314 00:41:19.910635   48503 certs.go:385] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.key.bdc00de9 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.key
	I0314 00:41:19.910708   48503 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/proxy-client.key
	I0314 00:41:19.910729   48503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/proxy-client.crt with IP's: []
	I0314 00:41:20.155524   48503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/proxy-client.crt ...
	I0314 00:41:20.155558   48503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/proxy-client.crt: {Name:mk3ae7b4e138925018d36d801396de4c72f1f1e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:41:20.155729   48503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/proxy-client.key ...
	I0314 00:41:20.155746   48503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/proxy-client.key: {Name:mk2f5ad9f7d1dd472c3507e4a2702c448c7eed8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:41:20.155939   48503 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:41:20.155987   48503 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:41:20.156003   48503 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:41:20.156040   48503 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:41:20.156083   48503 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:41:20.156116   48503 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:41:20.156183   48503 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:41:20.156744   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:41:20.186377   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:41:20.216663   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:41:20.249217   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:41:20.278580   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0314 00:41:20.307367   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:41:20.338141   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:41:20.365622   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:41:20.392782   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:41:20.421069   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:41:20.450438   48503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:41:20.477582   48503 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:41:20.502888   48503 ssh_runner.go:195] Run: openssl version
	I0314 00:41:20.509974   48503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:41:20.522631   48503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:41:20.528948   48503 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:41:20.529018   48503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:41:20.538714   48503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:41:20.560334   48503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:41:20.576724   48503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:41:20.582132   48503 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:41:20.582219   48503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:41:20.588294   48503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:41:20.606978   48503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:41:20.621121   48503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:41:20.628049   48503 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:41:20.628117   48503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:41:20.634628   48503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:41:20.646547   48503 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:41:20.651195   48503 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 00:41:20.651255   48503 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-552430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-552430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:41:20.651327   48503 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:41:20.651369   48503 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:41:20.692681   48503 cri.go:89] found id: ""
	I0314 00:41:20.692749   48503 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 00:41:20.705031   48503 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:41:20.716055   48503 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:41:20.726925   48503 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:41:20.726952   48503 kubeadm.go:156] found existing configuration files:
	
	I0314 00:41:20.727005   48503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:41:20.737220   48503 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:41:20.737319   48503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:41:20.748206   48503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:41:20.758672   48503 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:41:20.758746   48503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:41:20.769271   48503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:41:20.781359   48503 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:41:20.781419   48503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:41:20.791494   48503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:41:20.801494   48503 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:41:20.801563   48503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:41:20.812524   48503 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 00:41:20.957156   48503 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 00:41:20.957286   48503 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 00:41:21.137161   48503 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 00:41:21.137322   48503 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 00:41:21.137449   48503 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 00:41:21.427218   48503 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 00:41:21.430052   48503 out.go:204]   - Generating certificates and keys ...
	I0314 00:41:21.430265   48503 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 00:41:21.430372   48503 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 00:41:21.861648   48503 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 00:41:21.990061   48503 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 00:41:22.372751   48503 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 00:41:22.531166   48503 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 00:41:22.736980   48503 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 00:41:22.737213   48503 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-552430 localhost] and IPs [192.168.61.34 127.0.0.1 ::1]
	I0314 00:41:22.828960   48503 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 00:41:22.829275   48503 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-552430 localhost] and IPs [192.168.61.34 127.0.0.1 ::1]
	I0314 00:41:22.927971   48503 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 00:41:23.108331   48503 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 00:41:23.339184   48503 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 00:41:23.340014   48503 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 00:41:23.723332   48503 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 00:41:23.824749   48503 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 00:41:24.235504   48503 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 00:41:24.768193   48503 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 00:41:24.788583   48503 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 00:41:24.788699   48503 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 00:41:24.788735   48503 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 00:41:24.944478   48503 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 00:41:24.946433   48503 out.go:204]   - Booting up control plane ...
	I0314 00:41:24.946572   48503 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 00:41:24.966647   48503 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 00:41:24.967636   48503 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 00:41:24.969540   48503 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 00:41:24.979780   48503 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 00:42:04.976856   48503 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 00:42:04.977648   48503 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:42:04.977845   48503 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:42:09.978145   48503 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:42:09.978403   48503 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:42:19.978798   48503 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:42:19.979070   48503 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:42:39.980099   48503 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:42:39.980378   48503 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:43:19.980987   48503 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:43:19.981243   48503 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:43:19.981263   48503 kubeadm.go:309] 
	I0314 00:43:19.981340   48503 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 00:43:19.981406   48503 kubeadm.go:309] 		timed out waiting for the condition
	I0314 00:43:19.981417   48503 kubeadm.go:309] 
	I0314 00:43:19.981476   48503 kubeadm.go:309] 	This error is likely caused by:
	I0314 00:43:19.981550   48503 kubeadm.go:309] 		- The kubelet is not running
	I0314 00:43:19.981768   48503 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 00:43:19.981778   48503 kubeadm.go:309] 
	I0314 00:43:19.982058   48503 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 00:43:19.982188   48503 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 00:43:19.982347   48503 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 00:43:19.982389   48503 kubeadm.go:309] 
	I0314 00:43:19.982832   48503 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 00:43:19.983002   48503 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 00:43:19.983026   48503 kubeadm.go:309] 
	I0314 00:43:19.983250   48503 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 00:43:19.983645   48503 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 00:43:19.983904   48503 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 00:43:19.984119   48503 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 00:43:19.984143   48503 kubeadm.go:309] 
	I0314 00:43:19.985336   48503 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 00:43:19.985562   48503 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 00:43:19.985967   48503 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0314 00:43:19.986278   48503 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-552430 localhost] and IPs [192.168.61.34 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-552430 localhost] and IPs [192.168.61.34 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-552430 localhost] and IPs [192.168.61.34 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-552430 localhost] and IPs [192.168.61.34 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0314 00:43:19.986415   48503 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 00:43:21.265079   48503 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.27862435s)
	I0314 00:43:21.265172   48503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:43:21.281100   48503 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:43:21.291523   48503 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:43:21.291545   48503 kubeadm.go:156] found existing configuration files:
	
	I0314 00:43:21.291583   48503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:43:21.301544   48503 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:43:21.301594   48503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:43:21.311941   48503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:43:21.322276   48503 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:43:21.322345   48503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:43:21.332978   48503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:43:21.343320   48503 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:43:21.343384   48503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:43:21.353969   48503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:43:21.364110   48503 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:43:21.364182   48503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:43:21.374866   48503 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 00:43:21.633978   48503 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 00:45:17.927781   48503 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 00:45:17.927875   48503 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 00:45:17.929832   48503 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 00:45:17.929895   48503 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 00:45:17.929994   48503 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 00:45:17.930115   48503 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 00:45:17.930225   48503 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 00:45:17.930293   48503 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 00:45:17.932306   48503 out.go:204]   - Generating certificates and keys ...
	I0314 00:45:17.932405   48503 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 00:45:17.932487   48503 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 00:45:17.932596   48503 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 00:45:17.932672   48503 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 00:45:17.932750   48503 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 00:45:17.932814   48503 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 00:45:17.932887   48503 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 00:45:17.932959   48503 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 00:45:17.933046   48503 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 00:45:17.933147   48503 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 00:45:17.933195   48503 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 00:45:17.933264   48503 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 00:45:17.933323   48503 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 00:45:17.933384   48503 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 00:45:17.933461   48503 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 00:45:17.933542   48503 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 00:45:17.933661   48503 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 00:45:17.933724   48503 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 00:45:17.933753   48503 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 00:45:17.933801   48503 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 00:45:17.935372   48503 out.go:204]   - Booting up control plane ...
	I0314 00:45:17.935467   48503 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 00:45:17.935556   48503 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 00:45:17.935658   48503 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 00:45:17.935753   48503 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 00:45:17.935914   48503 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 00:45:17.935988   48503 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 00:45:17.936083   48503 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:45:17.936358   48503 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:45:17.936469   48503 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:45:17.936680   48503 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:45:17.936768   48503 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:45:17.936989   48503 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:45:17.937086   48503 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:45:17.937302   48503 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:45:17.937399   48503 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:45:17.937633   48503 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:45:17.937645   48503 kubeadm.go:309] 
	I0314 00:45:17.937688   48503 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 00:45:17.937769   48503 kubeadm.go:309] 		timed out waiting for the condition
	I0314 00:45:17.937809   48503 kubeadm.go:309] 
	I0314 00:45:17.937861   48503 kubeadm.go:309] 	This error is likely caused by:
	I0314 00:45:17.937908   48503 kubeadm.go:309] 		- The kubelet is not running
	I0314 00:45:17.938031   48503 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 00:45:17.938040   48503 kubeadm.go:309] 
	I0314 00:45:17.938148   48503 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 00:45:17.938194   48503 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 00:45:17.938230   48503 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 00:45:17.938238   48503 kubeadm.go:309] 
	I0314 00:45:17.938353   48503 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 00:45:17.938494   48503 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 00:45:17.938512   48503 kubeadm.go:309] 
	I0314 00:45:17.938649   48503 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 00:45:17.938800   48503 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 00:45:17.938886   48503 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 00:45:17.938994   48503 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 00:45:17.939021   48503 kubeadm.go:309] 
	I0314 00:45:17.939074   48503 kubeadm.go:393] duration metric: took 3m57.287822429s to StartCluster
	I0314 00:45:17.939119   48503 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:45:17.939181   48503 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:45:17.991125   48503 cri.go:89] found id: ""
	I0314 00:45:17.991147   48503 logs.go:276] 0 containers: []
	W0314 00:45:17.991156   48503 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:45:17.991162   48503 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:45:17.991209   48503 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:45:18.036551   48503 cri.go:89] found id: ""
	I0314 00:45:18.036579   48503 logs.go:276] 0 containers: []
	W0314 00:45:18.036586   48503 logs.go:278] No container was found matching "etcd"
	I0314 00:45:18.036591   48503 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:45:18.036633   48503 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:45:18.082721   48503 cri.go:89] found id: ""
	I0314 00:45:18.082752   48503 logs.go:276] 0 containers: []
	W0314 00:45:18.082782   48503 logs.go:278] No container was found matching "coredns"
	I0314 00:45:18.082791   48503 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:45:18.082852   48503 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:45:18.129607   48503 cri.go:89] found id: ""
	I0314 00:45:18.129637   48503 logs.go:276] 0 containers: []
	W0314 00:45:18.129648   48503 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:45:18.129656   48503 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:45:18.129711   48503 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:45:18.174144   48503 cri.go:89] found id: ""
	I0314 00:45:18.174171   48503 logs.go:276] 0 containers: []
	W0314 00:45:18.174181   48503 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:45:18.174189   48503 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:45:18.174248   48503 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:45:18.220330   48503 cri.go:89] found id: ""
	I0314 00:45:18.220359   48503 logs.go:276] 0 containers: []
	W0314 00:45:18.220370   48503 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:45:18.220377   48503 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:45:18.220431   48503 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:45:18.275407   48503 cri.go:89] found id: ""
	I0314 00:45:18.275436   48503 logs.go:276] 0 containers: []
	W0314 00:45:18.275447   48503 logs.go:278] No container was found matching "kindnet"
	I0314 00:45:18.275457   48503 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:45:18.275472   48503 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:45:18.428255   48503 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:45:18.428285   48503 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:45:18.428304   48503 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:45:18.552844   48503 logs.go:123] Gathering logs for container status ...
	I0314 00:45:18.552881   48503 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:45:18.601441   48503 logs.go:123] Gathering logs for kubelet ...
	I0314 00:45:18.601468   48503 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:45:18.676440   48503 logs.go:123] Gathering logs for dmesg ...
	I0314 00:45:18.676479   48503 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0314 00:45:18.693149   48503 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 00:45:18.693209   48503 out.go:239] * 
	* 
	W0314 00:45:18.693300   48503 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 00:45:18.693326   48503 out.go:239] * 
	W0314 00:45:18.694560   48503 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 00:45:18.698272   48503 out.go:177] 
	W0314 00:45:18.699777   48503 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 00:45:18.699837   48503 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 00:45:18.699868   48503 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 00:45:18.701557   48503 out.go:177] 

                                                
                                                
** /stderr **
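
The suggestion captured above points at a kubelet cgroup-driver mismatch on the v1.20.0 start. A minimal retry sketch, assuming the same profile, driver and runtime as this run; the extra-config flag comes from the log's own suggestion and was not executed as part of this job:

    # hypothetical re-run with the cgroup driver the log suggests
    out/minikube-linux-amd64 start -p kubernetes-upgrade-552430 --memory=2200 \
      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd
    # if the kubelet still refuses to come up, inspect it on the node
    out/minikube-linux-amd64 -p kubernetes-upgrade-552430 ssh "sudo journalctl -xeu kubelet"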
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-552430 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-552430
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-552430: (2.906183669s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-552430 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-552430 status --format={{.Host}}: exit status 7 (98.227983ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-552430 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-552430 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.721976662s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-552430 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-552430 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-552430 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (97.988889ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-552430] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-552430
	    minikube start -p kubernetes-upgrade-552430 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5524302 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-552430 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
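
The downgrade refusal above is expected behaviour: an existing v1.29.0-rc.2 profile cannot be moved back to v1.20.0 in place. A minimal sketch of the first recovery path suggested in the output, using the profile name from this run (illustrative only, not executed by the test):

    # option 1 from the suggestion above: recreate the profile at the older version
    out/minikube-linux-amd64 delete -p kubernetes-upgrade-552430
    out/minikube-linux-amd64 start -p kubernetes-upgrade-552430 \
      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio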
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-552430 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-552430 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (22.456178677s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-03-14 00:46:48.133988222 +0000 UTC m=+4836.203749706
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-552430 -n kubernetes-upgrade-552430
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-552430 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-552430 logs -n 25: (1.741511183s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-326260 sudo ip a s                         | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	| ssh     | -p calico-326260 sudo ip r s                         | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	| ssh     | -p calico-326260 sudo                                | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | iptables-save                                        |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo iptables                       | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | -t nat -L -n -v                                      |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo                                | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo                                | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo                                | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-326260 pgrep                       | custom-flannel-326260 | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | -a kubelet                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo cat                            | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo cat                            | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo                                | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo                                | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo cat                            | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo docker                         | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo                                | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo                                | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo cat                            | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo cat                            | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo                                | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo                                | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo                                | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo cat                            | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo cat                            | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo                                | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p calico-326260 sudo                                | calico-326260         | jenkins | v1.32.0 | 14 Mar 24 00:46 UTC | 14 Mar 24 00:46 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:46:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:46:25.736926   55561 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:46:25.737096   55561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:46:25.737107   55561 out.go:304] Setting ErrFile to fd 2...
	I0314 00:46:25.737112   55561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:46:25.737324   55561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:46:25.737921   55561 out.go:298] Setting JSON to false
	I0314 00:46:25.738979   55561 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5329,"bootTime":1710371857,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:46:25.739055   55561 start.go:139] virtualization: kvm guest
	I0314 00:46:20.844353   53605 node_ready.go:53] node "custom-flannel-326260" has status "Ready":"False"
	I0314 00:46:23.344285   53605 node_ready.go:53] node "custom-flannel-326260" has status "Ready":"False"
	I0314 00:46:25.741984   55561 out.go:177] * [kubernetes-upgrade-552430] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:46:25.743942   55561 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:46:25.743943   55561 notify.go:220] Checking for updates...
	I0314 00:46:25.745833   55561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:46:25.747154   55561 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:46:25.748566   55561 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:46:25.750293   55561 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:46:25.751680   55561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:46:25.753845   55561 config.go:182] Loaded profile config "kubernetes-upgrade-552430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:46:25.754423   55561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:46:25.754469   55561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:46:25.769339   55561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
	I0314 00:46:25.769870   55561 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:46:25.770490   55561 main.go:141] libmachine: Using API Version  1
	I0314 00:46:25.770513   55561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:46:25.770912   55561 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:46:25.771095   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:46:25.771389   55561 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:46:25.771767   55561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:46:25.771807   55561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:46:25.787443   55561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42797
	I0314 00:46:25.787938   55561 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:46:25.788451   55561 main.go:141] libmachine: Using API Version  1
	I0314 00:46:25.788486   55561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:46:25.788829   55561 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:46:25.789047   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:46:25.830397   55561 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 00:46:25.831650   55561 start.go:297] selected driver: kvm2
	I0314 00:46:25.831670   55561 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-552430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-552430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:46:25.831810   55561 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:46:25.832930   55561 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:46:25.833027   55561 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:46:25.848778   55561 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:46:25.849310   55561 cni.go:84] Creating CNI manager for ""
	I0314 00:46:25.849331   55561 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:46:25.849383   55561 start.go:340] cluster config:
	{Name:kubernetes-upgrade-552430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-552430 Namesp
ace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:46:25.849529   55561 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:46:25.851780   55561 out.go:177] * Starting "kubernetes-upgrade-552430" primary control-plane node in "kubernetes-upgrade-552430" cluster
	I0314 00:46:24.972185   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:24.972802   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | unable to find current IP address of domain enable-default-cni-326260 in network mk-enable-default-cni-326260
	I0314 00:46:24.972826   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | I0314 00:46:24.972751   55230 retry.go:31] will retry after 3.906776632s: waiting for machine to come up
	I0314 00:46:25.853086   55561 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 00:46:25.853161   55561 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0314 00:46:25.853176   55561 cache.go:56] Caching tarball of preloaded images
	I0314 00:46:25.853273   55561 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 00:46:25.853288   55561 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0314 00:46:25.853382   55561 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/config.json ...
	I0314 00:46:25.853608   55561 start.go:360] acquireMachinesLock for kubernetes-upgrade-552430: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:46:30.552058   55561 start.go:364] duration metric: took 4.698404053s to acquireMachinesLock for "kubernetes-upgrade-552430"
	I0314 00:46:30.552120   55561 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:46:30.552191   55561 fix.go:54] fixHost starting: 
	I0314 00:46:30.552653   55561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:46:30.552706   55561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:46:30.570967   55561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38019
	I0314 00:46:30.571450   55561 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:46:30.571980   55561 main.go:141] libmachine: Using API Version  1
	I0314 00:46:30.572001   55561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:46:30.572378   55561 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:46:30.572654   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:46:30.572887   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetState
	I0314 00:46:30.574866   55561 fix.go:112] recreateIfNeeded on kubernetes-upgrade-552430: state=Running err=<nil>
	W0314 00:46:30.574902   55561 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:46:30.578243   55561 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-552430" VM ...
	I0314 00:46:30.579683   55561 machine.go:94] provisionDockerMachine start ...
	I0314 00:46:30.579730   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:46:30.579960   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:46:30.583110   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:30.583544   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:46:30.583583   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:30.583716   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:46:30.583915   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:46:30.584088   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:46:30.584241   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:46:30.584449   55561 main.go:141] libmachine: Using SSH client type: native
	I0314 00:46:30.584706   55561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0314 00:46:30.584719   55561 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:46:30.683892   55561 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-552430
	
	I0314 00:46:30.683923   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetMachineName
	I0314 00:46:30.684197   55561 buildroot.go:166] provisioning hostname "kubernetes-upgrade-552430"
	I0314 00:46:30.684219   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetMachineName
	I0314 00:46:30.684356   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:46:30.687261   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:30.687608   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:46:30.687640   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:30.687812   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:46:30.688000   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:46:30.688162   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:46:30.688298   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:46:30.688477   55561 main.go:141] libmachine: Using SSH client type: native
	I0314 00:46:30.688755   55561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0314 00:46:30.688774   55561 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-552430 && echo "kubernetes-upgrade-552430" | sudo tee /etc/hostname
	I0314 00:46:25.843606   53605 node_ready.go:53] node "custom-flannel-326260" has status "Ready":"False"
	I0314 00:46:26.844516   53605 node_ready.go:49] node "custom-flannel-326260" has status "Ready":"True"
	I0314 00:46:26.844540   53605 node_ready.go:38] duration metric: took 8.004796306s for node "custom-flannel-326260" to be "Ready" ...
	I0314 00:46:26.844553   53605 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:46:26.851805   53605 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-g5swz" in "kube-system" namespace to be "Ready" ...
	I0314 00:46:28.859257   53605 pod_ready.go:102] pod "coredns-5dd5756b68-g5swz" in "kube-system" namespace has status "Ready":"False"
	I0314 00:46:28.881970   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:28.882479   54963 main.go:141] libmachine: (enable-default-cni-326260) Found IP for machine: 192.168.39.31
	I0314 00:46:28.882503   54963 main.go:141] libmachine: (enable-default-cni-326260) Reserving static IP address...
	I0314 00:46:28.882521   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has current primary IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:28.882964   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-326260", mac: "52:54:00:28:a1:1c", ip: "192.168.39.31"} in network mk-enable-default-cni-326260
	I0314 00:46:28.963145   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | Getting to WaitForSSH function...
	I0314 00:46:28.963172   54963 main.go:141] libmachine: (enable-default-cni-326260) Reserved static IP address: 192.168.39.31
	I0314 00:46:28.963189   54963 main.go:141] libmachine: (enable-default-cni-326260) Waiting for SSH to be available...
	I0314 00:46:28.965979   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:28.966402   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:28.966438   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:28.966554   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | Using SSH client type: external
	I0314 00:46:28.966585   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/enable-default-cni-326260/id_rsa (-rw-------)
	I0314 00:46:28.966617   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/enable-default-cni-326260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:46:28.966630   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | About to run SSH command:
	I0314 00:46:28.966643   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | exit 0
	I0314 00:46:29.099016   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | SSH cmd err, output: <nil>: 
	I0314 00:46:29.099328   54963 main.go:141] libmachine: (enable-default-cni-326260) KVM machine creation complete!
	I0314 00:46:29.099752   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetConfigRaw
	I0314 00:46:29.100325   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .DriverName
	I0314 00:46:29.100554   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .DriverName
	I0314 00:46:29.100737   54963 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0314 00:46:29.100752   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetState
	I0314 00:46:29.102234   54963 main.go:141] libmachine: Detecting operating system of created instance...
	I0314 00:46:29.102251   54963 main.go:141] libmachine: Waiting for SSH to be available...
	I0314 00:46:29.102258   54963 main.go:141] libmachine: Getting to WaitForSSH function...
	I0314 00:46:29.102268   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHHostname
	I0314 00:46:29.105082   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.105440   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:29.105482   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.105643   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHPort
	I0314 00:46:29.105828   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:29.105982   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:29.106131   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHUsername
	I0314 00:46:29.106316   54963 main.go:141] libmachine: Using SSH client type: native
	I0314 00:46:29.106540   54963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0314 00:46:29.106559   54963 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0314 00:46:29.222307   54963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:46:29.222330   54963 main.go:141] libmachine: Detecting the provisioner...
	I0314 00:46:29.222339   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHHostname
	I0314 00:46:29.225444   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.225851   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:29.225903   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.226081   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHPort
	I0314 00:46:29.226271   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:29.226453   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:29.226587   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHUsername
	I0314 00:46:29.226754   54963 main.go:141] libmachine: Using SSH client type: native
	I0314 00:46:29.227000   54963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0314 00:46:29.227034   54963 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0314 00:46:29.347918   54963 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0314 00:46:29.347984   54963 main.go:141] libmachine: found compatible host: buildroot
	I0314 00:46:29.347994   54963 main.go:141] libmachine: Provisioning with buildroot...
	I0314 00:46:29.348011   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetMachineName
	I0314 00:46:29.348283   54963 buildroot.go:166] provisioning hostname "enable-default-cni-326260"
	I0314 00:46:29.348315   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetMachineName
	I0314 00:46:29.348527   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHHostname
	I0314 00:46:29.351351   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.351768   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:29.351796   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.351925   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHPort
	I0314 00:46:29.352128   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:29.352288   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:29.352438   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHUsername
	I0314 00:46:29.352585   54963 main.go:141] libmachine: Using SSH client type: native
	I0314 00:46:29.352805   54963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0314 00:46:29.352824   54963 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-326260 && echo "enable-default-cni-326260" | sudo tee /etc/hostname
	I0314 00:46:29.490190   54963 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-326260
	
	I0314 00:46:29.490218   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHHostname
	I0314 00:46:29.493180   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.493544   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:29.493584   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.493789   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHPort
	I0314 00:46:29.493983   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:29.494132   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:29.494265   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHUsername
	I0314 00:46:29.494433   54963 main.go:141] libmachine: Using SSH client type: native
	I0314 00:46:29.494630   54963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0314 00:46:29.494666   54963 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-326260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-326260/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-326260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:46:29.630117   54963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:46:29.630160   54963 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:46:29.630229   54963 buildroot.go:174] setting up certificates
	I0314 00:46:29.630257   54963 provision.go:84] configureAuth start
	I0314 00:46:29.630281   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetMachineName
	I0314 00:46:29.630617   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetIP
	I0314 00:46:29.633309   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.633674   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:29.633693   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.633848   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHHostname
	I0314 00:46:29.636060   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.636386   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:29.636409   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.636572   54963 provision.go:143] copyHostCerts
	I0314 00:46:29.636638   54963 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:46:29.636650   54963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:46:29.636737   54963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:46:29.636850   54963 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:46:29.636860   54963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:46:29.636886   54963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:46:29.636956   54963 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:46:29.636965   54963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:46:29.636994   54963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:46:29.637119   54963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-326260 san=[127.0.0.1 192.168.39.31 enable-default-cni-326260 localhost minikube]
	I0314 00:46:29.753752   54963 provision.go:177] copyRemoteCerts
	I0314 00:46:29.753817   54963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:46:29.753846   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHHostname
	I0314 00:46:29.756891   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.757252   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:29.757285   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.757397   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHPort
	I0314 00:46:29.758985   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:29.759144   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHUsername
	I0314 00:46:29.759296   54963 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/enable-default-cni-326260/id_rsa Username:docker}
	I0314 00:46:29.854735   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:46:29.888527   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:46:29.918103   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0314 00:46:29.949789   54963 provision.go:87] duration metric: took 319.509139ms to configureAuth
	I0314 00:46:29.949821   54963 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:46:29.950013   54963 config.go:182] Loaded profile config "enable-default-cni-326260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:46:29.950095   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHHostname
	I0314 00:46:29.953034   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.953439   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:29.953498   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:29.953663   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHPort
	I0314 00:46:29.953868   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:29.954052   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:29.954222   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHUsername
	I0314 00:46:29.954418   54963 main.go:141] libmachine: Using SSH client type: native
	I0314 00:46:29.954631   54963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0314 00:46:29.954658   54963 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:46:30.287204   54963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:46:30.287250   54963 main.go:141] libmachine: Checking connection to Docker...
	I0314 00:46:30.287262   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetURL
	I0314 00:46:30.288696   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | Using libvirt version 6000000
	I0314 00:46:30.290906   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:30.291232   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:30.291268   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:30.291462   54963 main.go:141] libmachine: Docker is up and running!
	I0314 00:46:30.291477   54963 main.go:141] libmachine: Reticulating splines...
	I0314 00:46:30.291484   54963 client.go:171] duration metric: took 24.52236369s to LocalClient.Create
	I0314 00:46:30.291512   54963 start.go:167] duration metric: took 24.522469283s to libmachine.API.Create "enable-default-cni-326260"
	I0314 00:46:30.291525   54963 start.go:293] postStartSetup for "enable-default-cni-326260" (driver="kvm2")
	I0314 00:46:30.291542   54963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:46:30.291566   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .DriverName
	I0314 00:46:30.291830   54963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:46:30.291872   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHHostname
	I0314 00:46:30.294446   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:30.294846   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:30.294880   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:30.295088   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHPort
	I0314 00:46:30.295267   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:30.295454   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHUsername
	I0314 00:46:30.295594   54963 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/enable-default-cni-326260/id_rsa Username:docker}
	I0314 00:46:30.386046   54963 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:46:30.391021   54963 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:46:30.391070   54963 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:46:30.391151   54963 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:46:30.391223   54963 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:46:30.391306   54963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:46:30.402233   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:46:30.429824   54963 start.go:296] duration metric: took 138.282551ms for postStartSetup
	I0314 00:46:30.429886   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetConfigRaw
	I0314 00:46:30.430475   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetIP
	I0314 00:46:30.433144   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:30.433439   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:30.433468   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:30.433686   54963 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/config.json ...
	I0314 00:46:30.433901   54963 start.go:128] duration metric: took 24.689286493s to createHost
	I0314 00:46:30.433928   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHHostname
	I0314 00:46:30.436248   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:30.436583   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:30.436627   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:30.436746   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHPort
	I0314 00:46:30.436942   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:30.437115   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:30.437270   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHUsername
	I0314 00:46:30.437454   54963 main.go:141] libmachine: Using SSH client type: native
	I0314 00:46:30.437636   54963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0314 00:46:30.437649   54963 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:46:30.551903   54963 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377190.496895627
	
	I0314 00:46:30.551925   54963 fix.go:216] guest clock: 1710377190.496895627
	I0314 00:46:30.551932   54963 fix.go:229] Guest: 2024-03-14 00:46:30.496895627 +0000 UTC Remote: 2024-03-14 00:46:30.433915098 +0000 UTC m=+59.408440053 (delta=62.980529ms)
	I0314 00:46:30.551966   54963 fix.go:200] guest clock delta is within tolerance: 62.980529ms
	I0314 00:46:30.551971   54963 start.go:83] releasing machines lock for "enable-default-cni-326260", held for 24.807566054s
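The `date +%!s(MISSING).%!N(MISSING)` string above appears to be the command template rendered through Go's fmt package without arguments; judging by the numeric output (`1710377190.496895627`), the command actually executed on the guest is `date +%s.%N`, and fix.go then compares that value against the host clock. A minimal sketch of the same comparison, using the guest IP from the log and hypothetical variable names, illustrative only:

	# compare guest clock to host clock, as the fix.go "guest clock delta" lines report it
	guest=$(ssh docker@192.168.39.31 'date +%s.%N')   # guest timestamp, e.g. 1710377190.496895627
	host=$(date +%s.%N)                               # host timestamp taken at (roughly) the same moment
	echo "delta: $(echo "$host - $guest" | bc) seconds"
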
	I0314 00:46:30.551998   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .DriverName
	I0314 00:46:30.552275   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetIP
	I0314 00:46:30.555207   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:30.555602   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:30.555635   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:30.555809   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .DriverName
	I0314 00:46:30.556308   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .DriverName
	I0314 00:46:30.556490   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .DriverName
	I0314 00:46:30.556587   54963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:46:30.556627   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHHostname
	I0314 00:46:30.556735   54963 ssh_runner.go:195] Run: cat /version.json
	I0314 00:46:30.556763   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHHostname
	I0314 00:46:30.559401   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:30.559561   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:30.559836   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:30.559872   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:30.559898   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:30.559938   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:30.560022   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHPort
	I0314 00:46:30.560291   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHPort
	I0314 00:46:30.560321   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:30.560503   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHUsername
	I0314 00:46:30.560528   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHKeyPath
	I0314 00:46:30.560694   54963 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/enable-default-cni-326260/id_rsa Username:docker}
	I0314 00:46:30.560745   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetSSHUsername
	I0314 00:46:30.560887   54963 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/enable-default-cni-326260/id_rsa Username:docker}
	I0314 00:46:30.649487   54963 ssh_runner.go:195] Run: systemctl --version
	I0314 00:46:30.681125   54963 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:46:30.854234   54963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:46:30.862272   54963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:46:30.862347   54963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:46:30.881642   54963 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:46:30.881668   54963 start.go:494] detecting cgroup driver to use...
	I0314 00:46:30.881755   54963 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:46:30.899644   54963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:46:30.915241   54963 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:46:30.915313   54963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:46:30.934097   54963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:46:30.949054   54963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:46:31.089970   54963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:46:31.262709   54963 docker.go:233] disabling docker service ...
	I0314 00:46:31.262798   54963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:46:31.279915   54963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:46:31.296409   54963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:46:31.434422   54963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:46:31.588402   54963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:46:31.607893   54963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:46:31.630685   54963 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:46:31.630750   54963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:46:31.644092   54963 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:46:31.644161   54963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:46:31.655780   54963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:46:31.669785   54963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:46:31.682835   54963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:46:31.697150   54963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:46:31.708276   54963 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:46:31.708356   54963 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:46:31.722924   54963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:46:31.735294   54963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:46:31.891298   54963 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:46:32.047973   54963 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:46:32.048031   54963 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:46:32.053401   54963 start.go:562] Will wait 60s for crictl version
	I0314 00:46:32.053463   54963 ssh_runner.go:195] Run: which crictl
	I0314 00:46:32.058100   54963 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:46:32.103615   54963 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:46:32.103703   54963 ssh_runner.go:195] Run: crio --version
	I0314 00:46:32.137279   54963 ssh_runner.go:195] Run: crio --version
	I0314 00:46:32.176082   54963 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
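Taken together, the ssh_runner calls above amount to the following CRI-O preparation on the guest. This is only a consolidated sketch of the commands the log already records (paths and values copied from the log lines), not an additional step the harness performs:

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and switch CRI-O to the cgroupfs cgroup driver
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# bridged pod traffic must be visible to iptables; br_netfilter provides the sysctl
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	# reload units and restart CRI-O so the new config takes effect
	sudo systemctl daemon-reload && sudo systemctl restart crio
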
	I0314 00:46:30.810477   55561 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-552430
	
	I0314 00:46:30.810527   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:46:30.813078   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:30.813495   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:46:30.813534   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:30.813702   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:46:30.813895   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:46:30.814045   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:46:30.814195   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:46:30.814423   55561 main.go:141] libmachine: Using SSH client type: native
	I0314 00:46:30.814585   55561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0314 00:46:30.814601   55561 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-552430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-552430/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-552430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:46:30.920767   55561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:46:30.920796   55561 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:46:30.920815   55561 buildroot.go:174] setting up certificates
	I0314 00:46:30.920826   55561 provision.go:84] configureAuth start
	I0314 00:46:30.920838   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetMachineName
	I0314 00:46:30.921156   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetIP
	I0314 00:46:30.924602   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:30.925028   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:46:30.925058   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:30.925300   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:46:30.927967   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:30.928401   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:46:30.928444   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:30.928606   55561 provision.go:143] copyHostCerts
	I0314 00:46:30.928673   55561 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:46:30.928690   55561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:46:30.928759   55561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:46:30.928878   55561 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:46:30.928889   55561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:46:30.928920   55561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:46:30.929088   55561 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:46:30.929107   55561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:46:30.929171   55561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:46:30.929270   55561 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-552430 san=[127.0.0.1 192.168.61.34 kubernetes-upgrade-552430 localhost minikube]
	I0314 00:46:31.096177   55561 provision.go:177] copyRemoteCerts
	I0314 00:46:31.096240   55561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:46:31.096268   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:46:31.099753   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:31.100158   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:46:31.100188   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:31.100379   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:46:31.100614   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:46:31.100811   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:46:31.100940   55561 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430/id_rsa Username:docker}
	I0314 00:46:31.186704   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 00:46:31.218571   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:46:31.256260   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0314 00:46:31.286675   55561 provision.go:87] duration metric: took 365.836084ms to configureAuth
	I0314 00:46:31.286704   55561 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:46:31.286946   55561 config.go:182] Loaded profile config "kubernetes-upgrade-552430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:46:31.287043   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:46:31.289937   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:31.290314   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:46:31.290357   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:31.290529   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:46:31.290731   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:46:31.290940   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:46:31.291155   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:46:31.291353   55561 main.go:141] libmachine: Using SSH client type: native
	I0314 00:46:31.291536   55561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0314 00:46:31.291560   55561 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:46:32.350055   55561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:46:32.350087   55561 machine.go:97] duration metric: took 1.770369089s to provisionDockerMachine
	I0314 00:46:32.350104   55561 start.go:293] postStartSetup for "kubernetes-upgrade-552430" (driver="kvm2")
	I0314 00:46:32.350121   55561 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:46:32.350147   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:46:32.350497   55561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:46:32.350544   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:46:32.353519   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:32.353925   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:46:32.353981   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:32.354302   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:46:32.354487   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:46:32.354628   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:46:32.354756   55561 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430/id_rsa Username:docker}
	I0314 00:46:32.520384   55561 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:46:32.540787   55561 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:46:32.540819   55561 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:46:32.540888   55561 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:46:32.540992   55561 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:46:32.541081   55561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:46:32.593814   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:46:32.680218   55561 start.go:296] duration metric: took 330.098831ms for postStartSetup
	I0314 00:46:32.680265   55561 fix.go:56] duration metric: took 2.128136478s for fixHost
	I0314 00:46:32.680288   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:46:32.683605   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:32.684038   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:46:32.684070   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:32.684438   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:46:32.684662   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:46:32.684831   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:46:32.684982   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:46:32.685180   55561 main.go:141] libmachine: Using SSH client type: native
	I0314 00:46:32.685411   55561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0314 00:46:32.685431   55561 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:46:32.971234   55561 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377192.959916096
	
	I0314 00:46:32.971260   55561 fix.go:216] guest clock: 1710377192.959916096
	I0314 00:46:32.971270   55561 fix.go:229] Guest: 2024-03-14 00:46:32.959916096 +0000 UTC Remote: 2024-03-14 00:46:32.68027037 +0000 UTC m=+6.998931425 (delta=279.645726ms)
	I0314 00:46:32.971295   55561 fix.go:200] guest clock delta is within tolerance: 279.645726ms
	I0314 00:46:32.971301   55561 start.go:83] releasing machines lock for "kubernetes-upgrade-552430", held for 2.419198789s
	I0314 00:46:32.971325   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:46:32.971660   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetIP
	I0314 00:46:32.974951   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:32.975460   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:46:32.975497   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:32.975882   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:46:32.977078   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:46:32.977337   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .DriverName
	I0314 00:46:32.977461   55561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:46:32.977509   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:46:32.977608   55561 ssh_runner.go:195] Run: cat /version.json
	I0314 00:46:32.977624   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHHostname
	I0314 00:46:32.981079   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:32.981533   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:32.982015   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:46:32.982038   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:32.982080   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:46:32.982094   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:32.982282   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:46:32.982502   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:46:32.982553   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHPort
	I0314 00:46:32.982759   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:46:32.982820   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHKeyPath
	I0314 00:46:32.982953   55561 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430/id_rsa Username:docker}
	I0314 00:46:32.982971   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetSSHUsername
	I0314 00:46:32.983142   55561 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/kubernetes-upgrade-552430/id_rsa Username:docker}
	I0314 00:46:33.132368   55561 ssh_runner.go:195] Run: systemctl --version
	I0314 00:46:33.175005   55561 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:46:33.428888   55561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:46:33.469149   55561 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:46:33.469233   55561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:46:33.514397   55561 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0314 00:46:33.514424   55561 start.go:494] detecting cgroup driver to use...
	I0314 00:46:33.514477   55561 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:46:33.563879   55561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:46:33.616213   55561 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:46:33.616288   55561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:46:33.673925   55561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:46:33.697256   55561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:46:33.980456   55561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:46:34.219477   55561 docker.go:233] disabling docker service ...
	I0314 00:46:34.219552   55561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:46:34.241518   55561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:46:34.258436   55561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:46:34.476341   55561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:46:34.697695   55561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:46:34.719764   55561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:46:34.752302   55561 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:46:34.752376   55561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:46:34.766904   55561 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:46:34.766980   55561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:46:34.781822   55561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:46:34.794435   55561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:46:34.809328   55561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:46:34.830629   55561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:46:34.857856   55561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:46:34.893702   55561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:46:35.093274   55561 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:46:35.603838   55561 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:46:35.603926   55561 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:46:35.611281   55561 start.go:562] Will wait 60s for crictl version
	I0314 00:46:35.611354   55561 ssh_runner.go:195] Run: which crictl
	I0314 00:46:35.628149   55561 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
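	
	(Spot-check, not part of the logged run: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf, so once crio has restarted the drop-in should carry the values set there. Assuming the stock minikube drop-in path, a quick verification would be:)
	# print the keys the sed commands above are meant to have set
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the logged commands:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	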
	I0314 00:46:30.863185   53605 pod_ready.go:102] pod "coredns-5dd5756b68-g5swz" in "kube-system" namespace has status "Ready":"False"
	I0314 00:46:33.371381   53605 pod_ready.go:102] pod "coredns-5dd5756b68-g5swz" in "kube-system" namespace has status "Ready":"False"
	I0314 00:46:32.177487   54963 main.go:141] libmachine: (enable-default-cni-326260) Calling .GetIP
	I0314 00:46:32.180665   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:32.181059   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:a1:1c", ip: ""} in network mk-enable-default-cni-326260: {Iface:virbr4 ExpiryTime:2024-03-14 01:46:22 +0000 UTC Type:0 Mac:52:54:00:28:a1:1c Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:enable-default-cni-326260 Clientid:01:52:54:00:28:a1:1c}
	I0314 00:46:32.181085   54963 main.go:141] libmachine: (enable-default-cni-326260) DBG | domain enable-default-cni-326260 has defined IP address 192.168.39.31 and MAC address 52:54:00:28:a1:1c in network mk-enable-default-cni-326260
	I0314 00:46:32.181279   54963 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 00:46:32.186360   54963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
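	
	(The one-liner above rewrites /etc/hosts through a temp file: it keeps every line except any existing "host.minikube.internal" entry, appends the fresh "192.168.39.1	host.minikube.internal" mapping, and copies the result back over /etc/hosts with sudo. A hypothetical check that the entry landed, not something this run performs:)
	grep 'host.minikube.internal$' /etc/hosts
	# expect: 192.168.39.1	host.minikube.internal
	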
	I0314 00:46:32.200645   54963 kubeadm.go:877] updating cluster {Name:enable-default-cni-326260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:enable-default-cni-326260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:46:32.200777   54963 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:46:32.200842   54963 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:46:32.241749   54963 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 00:46:32.241818   54963 ssh_runner.go:195] Run: which lz4
	I0314 00:46:32.246738   54963 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 00:46:32.251802   54963 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:46:32.251834   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 00:46:34.121285   54963 crio.go:444] duration metric: took 1.874569346s to copy over tarball
	I0314 00:46:34.121350   54963 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:46:36.002249   55561 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:46:36.002338   55561 ssh_runner.go:195] Run: crio --version
	I0314 00:46:36.071398   55561 ssh_runner.go:195] Run: crio --version
	I0314 00:46:36.118222   55561 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 00:46:37.417369   54963 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.295991465s)
	I0314 00:46:37.417402   54963 crio.go:451] duration metric: took 3.296091141s to extract the tarball
	I0314 00:46:37.417410   54963 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:46:37.476352   54963 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:46:37.526268   54963 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:46:37.526300   54963 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:46:37.526310   54963 kubeadm.go:928] updating node { 192.168.39.31 8443 v1.28.4 crio true true} ...
	I0314 00:46:37.526429   54963 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-326260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-326260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0314 00:46:37.526515   54963 ssh_runner.go:195] Run: crio config
	I0314 00:46:37.575069   54963 cni.go:84] Creating CNI manager for "bridge"
	I0314 00:46:37.575099   54963 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:46:37.575130   54963 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.31 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-326260 NodeName:enable-default-cni-326260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:46:37.575318   54963 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-326260"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:46:37.575401   54963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:46:37.588237   54963 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:46:37.588305   54963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:46:37.600839   54963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0314 00:46:37.621334   54963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:46:37.643556   54963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 00:46:37.664172   54963 ssh_runner.go:195] Run: grep 192.168.39.31	control-plane.minikube.internal$ /etc/hosts
	I0314 00:46:37.668458   54963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:46:37.685627   54963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:46:37.832429   54963 ssh_runner.go:195] Run: sudo systemctl start kubelet
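	
	(The config staged above as /var/tmp/minikube/kubeadm.yaml.new is what later backs kubeadm init. A hypothetical way to exercise it without mutating the node, not something this run does, is kubeadm's dry-run mode against the same pinned binaries:)
	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	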
	I0314 00:46:37.852817   54963 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260 for IP: 192.168.39.31
	I0314 00:46:37.852841   54963 certs.go:194] generating shared ca certs ...
	I0314 00:46:37.852864   54963 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:46:37.853076   54963 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:46:37.853163   54963 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:46:37.853186   54963 certs.go:256] generating profile certs ...
	I0314 00:46:37.853264   54963 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.key
	I0314 00:46:37.853285   54963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt with IP's: []
	I0314 00:46:38.067578   54963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt ...
	I0314 00:46:38.067606   54963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: {Name:mkd3419f05fef25e8369f48032eda0b667734ae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:46:38.070929   54963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.key ...
	I0314 00:46:38.070960   54963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.key: {Name:mke506047eef23ffe29bcd3aff2cde86889f281f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:46:38.090109   54963 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/apiserver.key.58fd9a89
	I0314 00:46:38.090161   54963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/apiserver.crt.58fd9a89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.31]
	I0314 00:46:38.209365   54963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/apiserver.crt.58fd9a89 ...
	I0314 00:46:38.209392   54963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/apiserver.crt.58fd9a89: {Name:mk6bc3168ba5d2f3d9f4e0113a29c2674ea1e954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:46:38.209544   54963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/apiserver.key.58fd9a89 ...
	I0314 00:46:38.209556   54963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/apiserver.key.58fd9a89: {Name:mk53389c66a36f45007e056005cf7af9853adbf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:46:38.209620   54963 certs.go:381] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/apiserver.crt.58fd9a89 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/apiserver.crt
	I0314 00:46:38.209710   54963 certs.go:385] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/apiserver.key.58fd9a89 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/apiserver.key
	I0314 00:46:38.209763   54963 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/proxy-client.key
	I0314 00:46:38.209778   54963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/proxy-client.crt with IP's: []
	I0314 00:46:38.382274   54963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/proxy-client.crt ...
	I0314 00:46:38.382312   54963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/proxy-client.crt: {Name:mkcc7919c7483159da4db0a6c13135a6a6f7850c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:46:38.382472   54963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/proxy-client.key ...
	I0314 00:46:38.382484   54963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/proxy-client.key: {Name:mk2f630eab3cb55d16bf3c039e686dbc63696e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
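	
	(crypto.go above signs each profile certificate with the shared minikubeCA. A rough, purely illustrative openssl equivalent for the "minikube-user" client cert; the RSA key size and the system:masters/minikube-user subject are assumptions, not read from this log:)
	openssl genrsa -out client.key 2048
	openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
	# sign with the shared CA from the profile store (ca.crt / ca.key under .minikube/)
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 365
	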
	I0314 00:46:38.382683   54963 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:46:38.382733   54963 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:46:38.382749   54963 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:46:38.382804   54963 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:46:38.382836   54963 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:46:38.382864   54963 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:46:38.382904   54963 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:46:38.383461   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:46:38.414398   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:46:38.442560   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:46:38.469898   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:46:38.496291   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0314 00:46:38.520958   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:46:38.552084   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:46:38.621160   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:46:38.653459   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:46:38.682033   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:46:38.711899   54963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:46:38.743471   54963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:46:38.761142   54963 ssh_runner.go:195] Run: openssl version
	I0314 00:46:38.767128   54963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:46:38.779263   54963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:46:38.784618   54963 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:46:38.784681   54963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:46:38.791155   54963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:46:38.804534   54963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:46:38.816531   54963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:46:38.821765   54963 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:46:38.821826   54963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:46:38.827984   54963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:46:38.840236   54963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:46:38.853057   54963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:46:38.859323   54963 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:46:38.859387   54963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:46:38.867117   54963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
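	
	(The pattern above, openssl x509 -hash followed by a symlink named <hash>.0, is OpenSSL's standard subject-hash layout for a CA directory; 3ec20f2e.0, b5213941.0 and 51391683.0 are those hashes. A generic sketch of the same step for one CA file, assuming that conventional layout:)
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941 for the minikube CA above
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"  # the .0 suffix disambiguates hash collisions
	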
	I0314 00:46:38.879373   54963 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:46:38.883889   54963 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 00:46:38.883945   54963 kubeadm.go:391] StartCluster: {Name:enable-default-cni-326260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.4 ClusterName:enable-default-cni-326260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:46:38.884044   54963 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:46:38.884099   54963 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:46:38.930170   54963 cri.go:89] found id: ""
	I0314 00:46:38.930239   54963 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 00:46:38.942456   54963 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:46:38.954434   54963 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:46:38.967435   54963 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:46:38.967457   54963 kubeadm.go:156] found existing configuration files:
	
	I0314 00:46:38.967507   54963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:46:38.981450   54963 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:46:38.981513   54963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:46:38.996177   54963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:46:39.010149   54963 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:46:39.010209   54963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:46:39.025282   54963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:46:39.039306   54963 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:46:39.039367   54963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:46:39.054856   54963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:46:39.068470   54963 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:46:39.068535   54963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:46:39.082979   54963 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 00:46:39.149898   54963 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 00:46:39.149973   54963 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 00:46:39.327046   54963 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 00:46:39.327198   54963 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 00:46:39.327320   54963 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 00:46:39.655689   54963 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 00:46:36.120266   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) Calling .GetIP
	I0314 00:46:36.123384   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:36.123771   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:12", ip: ""} in network mk-kubernetes-upgrade-552430: {Iface:virbr3 ExpiryTime:2024-03-14 01:41:01 +0000 UTC Type:0 Mac:52:54:00:eb:57:12 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-552430 Clientid:01:52:54:00:eb:57:12}
	I0314 00:46:36.123806   55561 main.go:141] libmachine: (kubernetes-upgrade-552430) DBG | domain kubernetes-upgrade-552430 has defined IP address 192.168.61.34 and MAC address 52:54:00:eb:57:12 in network mk-kubernetes-upgrade-552430
	I0314 00:46:36.124064   55561 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0314 00:46:36.130517   55561 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-552430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-552430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:46:36.130675   55561 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 00:46:36.130741   55561 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:46:36.206485   55561 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:46:36.206509   55561 crio.go:415] Images already preloaded, skipping extraction
	I0314 00:46:36.206579   55561 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:46:36.258593   55561 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:46:36.258619   55561 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:46:36.258627   55561 kubeadm.go:928] updating node { 192.168.61.34 8443 v1.29.0-rc.2 crio true true} ...
	I0314 00:46:36.258783   55561 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-552430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-552430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:46:36.258870   55561 ssh_runner.go:195] Run: crio config
	I0314 00:46:36.331583   55561 cni.go:84] Creating CNI manager for ""
	I0314 00:46:36.331606   55561 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:46:36.331618   55561 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:46:36.331638   55561 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.34 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-552430 NodeName:kubernetes-upgrade-552430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:46:36.331819   55561 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-552430"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:46:36.331896   55561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 00:46:36.348111   55561 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:46:36.348197   55561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:46:36.363068   55561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I0314 00:46:36.390171   55561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 00:46:36.412997   55561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0314 00:46:36.437938   55561 ssh_runner.go:195] Run: grep 192.168.61.34	control-plane.minikube.internal$ /etc/hosts
	I0314 00:46:36.443538   55561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:46:36.614480   55561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:46:36.639259   55561 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430 for IP: 192.168.61.34
	I0314 00:46:36.639286   55561 certs.go:194] generating shared ca certs ...
	I0314 00:46:36.639306   55561 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:46:36.639487   55561 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:46:36.639546   55561 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:46:36.639557   55561 certs.go:256] generating profile certs ...
	I0314 00:46:36.639683   55561 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/client.key
	I0314 00:46:36.639739   55561 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.key.bdc00de9
	I0314 00:46:36.639787   55561 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/proxy-client.key
	I0314 00:46:36.639932   55561 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:46:36.639976   55561 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:46:36.639989   55561 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:46:36.640030   55561 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:46:36.640065   55561 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:46:36.640103   55561 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:46:36.640158   55561 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:46:36.640987   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:46:36.676716   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:46:36.754944   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:46:36.788412   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:46:36.824096   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0314 00:46:36.863354   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:46:36.911131   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:46:36.951289   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kubernetes-upgrade-552430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:46:36.992429   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:46:37.030373   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:46:37.067439   55561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:46:37.106044   55561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:46:37.133523   55561 ssh_runner.go:195] Run: openssl version
	I0314 00:46:37.141683   55561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:46:37.158868   55561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:46:37.164945   55561 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:46:37.165017   55561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:46:37.172315   55561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:46:37.185524   55561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:46:37.201861   55561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:46:37.208385   55561 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:46:37.208442   55561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:46:37.215957   55561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:46:37.230277   55561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:46:37.244836   55561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:46:37.251907   55561 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:46:37.251981   55561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:46:37.260721   55561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:46:37.274290   55561 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:46:37.280322   55561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:46:37.287276   55561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:46:37.297803   55561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:46:37.307730   55561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:46:37.324458   55561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:46:37.333972   55561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
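	
	(Each -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds, i.e. 24 hours, from now; openssl exits 0 if so and non-zero otherwise. A standalone illustration, not taken from this run:)
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid in 24h" \
	  || echo "expires within 24h"
	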
	I0314 00:46:37.344240   55561 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-552430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-552430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:46:37.344357   55561 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:46:37.344413   55561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:46:37.400945   55561 cri.go:89] found id: "9f294bcf08c0fabfc43f0b7a54d4153484c99215e505abd902917671709e0807"
	I0314 00:46:37.400971   55561 cri.go:89] found id: "b8d0b45f648e4ce4ca30ad7e163aa9b1ac459e381f267c3825b47c75d92ae85f"
	I0314 00:46:37.400977   55561 cri.go:89] found id: "626436bd8baa10d4b158bca65269a473d02662a9ae4fd54a08f6da0ed7b05d74"
	I0314 00:46:37.400982   55561 cri.go:89] found id: "83f9bcde6c185ddacfdefd17dcb4f859f47649fd62df45a23f9e0c99f44b3cf9"
	I0314 00:46:37.400997   55561 cri.go:89] found id: ""
	I0314 00:46:37.401054   55561 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.067339353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710377209067282690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0494a402-dc4a-4bd7-a47d-e5a06cc44a28 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.068152147Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17b3eeea-5c1f-466b-9068-3fade8158edb name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.068258336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17b3eeea-5c1f-466b-9068-3fade8158edb name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.068712434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c3c843321a9e47a42151733bb65c4c34d61ae4073ad1c2c4647d37b051f272a,PodSandboxId:49087fa9274a136973ae5681c0855a4b8ac753bb56badaaa2f8e0c4663387810,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710377201083949957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f15d8b7421373280f42f6a9099230430,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc406e098f85d092f21d7202818adec5be61ed8856c761e06663b8ebdf9e14f,PodSandboxId:6bed9c628a6eb3d70f709091a3c0754f9f6d088ecf9ba756926e834f8e63682f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710377201076668825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec4ca548727b6ef84c1ff50bcfed066,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f75b6cab764623fa089d1fc1061c30971e1dbcd6684ede46e2814dff3b5242,PodSandboxId:044411a6a4ec66bafa8e02f3a26213fd88830dd0dc9464f6214c01a8e2a3dbaa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710377201026506384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e208f2e84a7a8c2b7ba4408477b4151b,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6bfbaf,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9475658694ca67fdffd5b6b7e820efb91d42f0fa5d6c5d4a16677919c06aa141,PodSandboxId:88c9c7cd9d0185565c4c0d3df6ac761bdddfae34c7e68531609efef53c2b469f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710377201040812172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99468a4d816f0760822a2e5e04a17f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 736fcb14,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d0b45f648e4ce4ca30ad7e163aa9b1ac459e381f267c3825b47c75d92ae85f,PodSandboxId:fe022442dce0c3638ab37f6a6d201ae57421bfc710c21a6b6949ee504971441b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710377192890400041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f15d8b7421373280f42f6a9099230430,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f294bcf08c0fabfc43f0b7a54d4153484c99215e505abd902917671709e0807,PodSandboxId:8835619127e7b9418cf8c5c1a3998a3250b8e53afc0377ffb6dadadedf1aa9ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710377192924584146,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec4ca548727b6ef84c1ff50bcfed066,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:626436bd8baa10d4b158bca65269a473d02662a9ae4fd54a08f6da0ed7b05d74,PodSandboxId:33bd3038425b174b57f4daf1b6a36ddd12816c52ae1b0f451684b0badaa87e96,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710377192842254904,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e208f2e84a7a8c2b7ba4408477b4151b,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6bfbaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f9bcde6c185ddacfdefd17dcb4f859f47649fd62df45a23f9e0c99f44b3cf9,PodSandboxId:0238c6e4bc3f1a9a2e6c854d6c490c7dee5420c2a0094313cdc8168157f90ce3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710377192812377711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99468a4d816f0760822a2e5e04a17f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 736fcb14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17b3eeea-5c1f-466b-9068-3fade8158edb name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.129802883Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60fa3f90-f409-4093-93c6-e37557cf3335 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.130240641Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60fa3f90-f409-4093-93c6-e37557cf3335 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.132425541Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3837fd5-d247-479d-ae88-010ee5acea36 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.132977465Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710377209132942644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3837fd5-d247-479d-ae88-010ee5acea36 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.135285673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4973ed66-e435-4cbb-b6cb-f19dcddd359e name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.135360388Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4973ed66-e435-4cbb-b6cb-f19dcddd359e name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.135640683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c3c843321a9e47a42151733bb65c4c34d61ae4073ad1c2c4647d37b051f272a,PodSandboxId:49087fa9274a136973ae5681c0855a4b8ac753bb56badaaa2f8e0c4663387810,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710377201083949957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f15d8b7421373280f42f6a9099230430,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc406e098f85d092f21d7202818adec5be61ed8856c761e06663b8ebdf9e14f,PodSandboxId:6bed9c628a6eb3d70f709091a3c0754f9f6d088ecf9ba756926e834f8e63682f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710377201076668825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec4ca548727b6ef84c1ff50bcfed066,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f75b6cab764623fa089d1fc1061c30971e1dbcd6684ede46e2814dff3b5242,PodSandboxId:044411a6a4ec66bafa8e02f3a26213fd88830dd0dc9464f6214c01a8e2a3dbaa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710377201026506384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e208f2e84a7a8c2b7ba4408477b4151b,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6bfbaf,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9475658694ca67fdffd5b6b7e820efb91d42f0fa5d6c5d4a16677919c06aa141,PodSandboxId:88c9c7cd9d0185565c4c0d3df6ac761bdddfae34c7e68531609efef53c2b469f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710377201040812172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99468a4d816f0760822a2e5e04a17f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 736fcb14,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d0b45f648e4ce4ca30ad7e163aa9b1ac459e381f267c3825b47c75d92ae85f,PodSandboxId:fe022442dce0c3638ab37f6a6d201ae57421bfc710c21a6b6949ee504971441b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710377192890400041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f15d8b7421373280f42f6a9099230430,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f294bcf08c0fabfc43f0b7a54d4153484c99215e505abd902917671709e0807,PodSandboxId:8835619127e7b9418cf8c5c1a3998a3250b8e53afc0377ffb6dadadedf1aa9ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710377192924584146,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec4ca548727b6ef84c1ff50bcfed066,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:626436bd8baa10d4b158bca65269a473d02662a9ae4fd54a08f6da0ed7b05d74,PodSandboxId:33bd3038425b174b57f4daf1b6a36ddd12816c52ae1b0f451684b0badaa87e96,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710377192842254904,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e208f2e84a7a8c2b7ba4408477b4151b,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6bfbaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f9bcde6c185ddacfdefd17dcb4f859f47649fd62df45a23f9e0c99f44b3cf9,PodSandboxId:0238c6e4bc3f1a9a2e6c854d6c490c7dee5420c2a0094313cdc8168157f90ce3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710377192812377711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99468a4d816f0760822a2e5e04a17f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 736fcb14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4973ed66-e435-4cbb-b6cb-f19dcddd359e name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.222649868Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fdde2be-86d0-45db-8f13-17f7de36513d name=/runtime.v1.RuntimeService/Version
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.222783946Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fdde2be-86d0-45db-8f13-17f7de36513d name=/runtime.v1.RuntimeService/Version
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.224406066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e46b9c62-176d-4813-8091-9a4cdd7f1640 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.225165579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710377209225127487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e46b9c62-176d-4813-8091-9a4cdd7f1640 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.226665153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fb4c1bc-f346-43c5-83e2-cdd54d9e9a1b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.226892159Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fb4c1bc-f346-43c5-83e2-cdd54d9e9a1b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.227351470Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c3c843321a9e47a42151733bb65c4c34d61ae4073ad1c2c4647d37b051f272a,PodSandboxId:49087fa9274a136973ae5681c0855a4b8ac753bb56badaaa2f8e0c4663387810,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710377201083949957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f15d8b7421373280f42f6a9099230430,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc406e098f85d092f21d7202818adec5be61ed8856c761e06663b8ebdf9e14f,PodSandboxId:6bed9c628a6eb3d70f709091a3c0754f9f6d088ecf9ba756926e834f8e63682f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710377201076668825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec4ca548727b6ef84c1ff50bcfed066,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f75b6cab764623fa089d1fc1061c30971e1dbcd6684ede46e2814dff3b5242,PodSandboxId:044411a6a4ec66bafa8e02f3a26213fd88830dd0dc9464f6214c01a8e2a3dbaa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710377201026506384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e208f2e84a7a8c2b7ba4408477b4151b,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6bfbaf,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9475658694ca67fdffd5b6b7e820efb91d42f0fa5d6c5d4a16677919c06aa141,PodSandboxId:88c9c7cd9d0185565c4c0d3df6ac761bdddfae34c7e68531609efef53c2b469f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710377201040812172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99468a4d816f0760822a2e5e04a17f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 736fcb14,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d0b45f648e4ce4ca30ad7e163aa9b1ac459e381f267c3825b47c75d92ae85f,PodSandboxId:fe022442dce0c3638ab37f6a6d201ae57421bfc710c21a6b6949ee504971441b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710377192890400041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f15d8b7421373280f42f6a9099230430,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f294bcf08c0fabfc43f0b7a54d4153484c99215e505abd902917671709e0807,PodSandboxId:8835619127e7b9418cf8c5c1a3998a3250b8e53afc0377ffb6dadadedf1aa9ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710377192924584146,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec4ca548727b6ef84c1ff50bcfed066,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:626436bd8baa10d4b158bca65269a473d02662a9ae4fd54a08f6da0ed7b05d74,PodSandboxId:33bd3038425b174b57f4daf1b6a36ddd12816c52ae1b0f451684b0badaa87e96,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710377192842254904,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e208f2e84a7a8c2b7ba4408477b4151b,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6bfbaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f9bcde6c185ddacfdefd17dcb4f859f47649fd62df45a23f9e0c99f44b3cf9,PodSandboxId:0238c6e4bc3f1a9a2e6c854d6c490c7dee5420c2a0094313cdc8168157f90ce3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710377192812377711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99468a4d816f0760822a2e5e04a17f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 736fcb14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fb4c1bc-f346-43c5-83e2-cdd54d9e9a1b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.291872219Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=caf5603f-a8b6-47bd-a367-33290a128aaf name=/runtime.v1.RuntimeService/Version
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.291978550Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=caf5603f-a8b6-47bd-a367-33290a128aaf name=/runtime.v1.RuntimeService/Version
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.297324196Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3a50041-0bed-47cf-89e2-8bc21e8b2120 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.297871828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710377209297837414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3a50041-0bed-47cf-89e2-8bc21e8b2120 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.300371392Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a604b97c-12c9-46f4-8059-d46b5ef92540 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.300450881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a604b97c-12c9-46f4-8059-d46b5ef92540 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:46:49 kubernetes-upgrade-552430 crio[1849]: time="2024-03-14 00:46:49.300761632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c3c843321a9e47a42151733bb65c4c34d61ae4073ad1c2c4647d37b051f272a,PodSandboxId:49087fa9274a136973ae5681c0855a4b8ac753bb56badaaa2f8e0c4663387810,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710377201083949957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f15d8b7421373280f42f6a9099230430,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc406e098f85d092f21d7202818adec5be61ed8856c761e06663b8ebdf9e14f,PodSandboxId:6bed9c628a6eb3d70f709091a3c0754f9f6d088ecf9ba756926e834f8e63682f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710377201076668825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec4ca548727b6ef84c1ff50bcfed066,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f75b6cab764623fa089d1fc1061c30971e1dbcd6684ede46e2814dff3b5242,PodSandboxId:044411a6a4ec66bafa8e02f3a26213fd88830dd0dc9464f6214c01a8e2a3dbaa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710377201026506384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e208f2e84a7a8c2b7ba4408477b4151b,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6bfbaf,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9475658694ca67fdffd5b6b7e820efb91d42f0fa5d6c5d4a16677919c06aa141,PodSandboxId:88c9c7cd9d0185565c4c0d3df6ac761bdddfae34c7e68531609efef53c2b469f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710377201040812172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99468a4d816f0760822a2e5e04a17f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 736fcb14,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d0b45f648e4ce4ca30ad7e163aa9b1ac459e381f267c3825b47c75d92ae85f,PodSandboxId:fe022442dce0c3638ab37f6a6d201ae57421bfc710c21a6b6949ee504971441b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710377192890400041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f15d8b7421373280f42f6a9099230430,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f294bcf08c0fabfc43f0b7a54d4153484c99215e505abd902917671709e0807,PodSandboxId:8835619127e7b9418cf8c5c1a3998a3250b8e53afc0377ffb6dadadedf1aa9ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710377192924584146,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec4ca548727b6ef84c1ff50bcfed066,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:626436bd8baa10d4b158bca65269a473d02662a9ae4fd54a08f6da0ed7b05d74,PodSandboxId:33bd3038425b174b57f4daf1b6a36ddd12816c52ae1b0f451684b0badaa87e96,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710377192842254904,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e208f2e84a7a8c2b7ba4408477b4151b,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6bfbaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f9bcde6c185ddacfdefd17dcb4f859f47649fd62df45a23f9e0c99f44b3cf9,PodSandboxId:0238c6e4bc3f1a9a2e6c854d6c490c7dee5420c2a0094313cdc8168157f90ce3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710377192812377711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-552430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99468a4d816f0760822a2e5e04a17f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 736fcb14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a604b97c-12c9-46f4-8059-d46b5ef92540 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2c3c843321a9e       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   8 seconds ago       Running             kube-scheduler            2                   49087fa9274a1       kube-scheduler-kubernetes-upgrade-552430
	9fc406e098f85       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   8 seconds ago       Running             kube-controller-manager   2                   6bed9c628a6eb       kube-controller-manager-kubernetes-upgrade-552430
	9475658694ca6       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   8 seconds ago       Running             kube-apiserver            2                   88c9c7cd9d018       kube-apiserver-kubernetes-upgrade-552430
	26f75b6cab764       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   8 seconds ago       Running             etcd                      2                   044411a6a4ec6       etcd-kubernetes-upgrade-552430
	9f294bcf08c0f       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   16 seconds ago      Exited              kube-controller-manager   1                   8835619127e7b       kube-controller-manager-kubernetes-upgrade-552430
	b8d0b45f648e4       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   16 seconds ago      Exited              kube-scheduler            1                   fe022442dce0c       kube-scheduler-kubernetes-upgrade-552430
	626436bd8baa1       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   16 seconds ago      Exited              etcd                      1                   33bd3038425b1       etcd-kubernetes-upgrade-552430
	83f9bcde6c185       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   16 seconds ago      Exited              kube-apiserver            1                   0238c6e4bc3f1       kube-apiserver-kubernetes-upgrade-552430
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-552430
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-552430
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:46:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-552430
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:46:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 00:46:44 +0000   Thu, 14 Mar 2024 00:46:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 00:46:44 +0000   Thu, 14 Mar 2024 00:46:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 00:46:44 +0000   Thu, 14 Mar 2024 00:46:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 00:46:44 +0000   Thu, 14 Mar 2024 00:46:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.34
	  Hostname:    kubernetes-upgrade-552430
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 adcc9351b59e4d92ae6ac423ee3ffb57
	  System UUID:                adcc9351-b59e-4d92-ae6a-c423ee3ffb57
	  Boot ID:                    3553542c-c686-4e8d-ab4d-0b7baa56ed09
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-552430                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20s
	  kube-system                 kube-apiserver-kubernetes-upgrade-552430             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-552430    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-kubernetes-upgrade-552430             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet  Node kubernetes-upgrade-552430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet  Node kubernetes-upgrade-552430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x7 over 35s)  kubelet  Node kubernetes-upgrade-552430 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 9s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet  Node kubernetes-upgrade-552430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet  Node kubernetes-upgrade-552430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet  Node kubernetes-upgrade-552430 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +1.696167] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar14 00:46] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.064345] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.085996] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.214514] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.151598] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.264558] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +4.980139] systemd-fstab-generator[728]: Ignoring "noauto" option for root device
	[  +0.071453] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.471468] systemd-fstab-generator[852]: Ignoring "noauto" option for root device
	[ +10.127736] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.110624] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.864839] kauditd_printk_skb: 21 callbacks suppressed
	[  +1.177186] systemd-fstab-generator[1770]: Ignoring "noauto" option for root device
	[  +0.281902] systemd-fstab-generator[1782]: Ignoring "noauto" option for root device
	[  +0.254687] systemd-fstab-generator[1798]: Ignoring "noauto" option for root device
	[  +0.246749] systemd-fstab-generator[1810]: Ignoring "noauto" option for root device
	[  +0.379548] systemd-fstab-generator[1834]: Ignoring "noauto" option for root device
	[  +1.551059] systemd-fstab-generator[2163]: Ignoring "noauto" option for root device
	[  +3.760395] systemd-fstab-generator[2288]: Ignoring "noauto" option for root device
	[  +0.089972] kauditd_printk_skb: 186 callbacks suppressed
	[  +6.426862] systemd-fstab-generator[2564]: Ignoring "noauto" option for root device
	[  +0.146627] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [26f75b6cab764623fa089d1fc1061c30971e1dbcd6684ede46e2814dff3b5242] <==
	{"level":"info","ts":"2024-03-14T00:46:41.623465Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:46:41.62348Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:46:41.623804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b switched to configuration voters=(9659354804491947931)"}
	{"level":"info","ts":"2024-03-14T00:46:41.627093Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3b988ca96e7ba1f2","local-member-id":"860cec0469348f9b","added-peer-id":"860cec0469348f9b","added-peer-peer-urls":["https://192.168.61.34:2380"]}
	{"level":"info","ts":"2024-03-14T00:46:41.62724Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b988ca96e7ba1f2","local-member-id":"860cec0469348f9b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:46:41.627332Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:46:41.638283Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.34:2380"}
	{"level":"info","ts":"2024-03-14T00:46:41.638333Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.34:2380"}
	{"level":"info","ts":"2024-03-14T00:46:41.638413Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T00:46:41.645647Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-14T00:46:41.645967Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"860cec0469348f9b","initial-advertise-peer-urls":["https://192.168.61.34:2380"],"listen-peer-urls":["https://192.168.61.34:2380"],"advertise-client-urls":["https://192.168.61.34:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.34:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T00:46:42.554111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b is starting a new election at term 3"}
	{"level":"info","ts":"2024-03-14T00:46:42.554202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b became pre-candidate at term 3"}
	{"level":"info","ts":"2024-03-14T00:46:42.554234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b received MsgPreVoteResp from 860cec0469348f9b at term 3"}
	{"level":"info","ts":"2024-03-14T00:46:42.554249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b became candidate at term 4"}
	{"level":"info","ts":"2024-03-14T00:46:42.554255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b received MsgVoteResp from 860cec0469348f9b at term 4"}
	{"level":"info","ts":"2024-03-14T00:46:42.554263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b became leader at term 4"}
	{"level":"info","ts":"2024-03-14T00:46:42.554271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 860cec0469348f9b elected leader 860cec0469348f9b at term 4"}
	{"level":"info","ts":"2024-03-14T00:46:42.567301Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"860cec0469348f9b","local-member-attributes":"{Name:kubernetes-upgrade-552430 ClientURLs:[https://192.168.61.34:2379]}","request-path":"/0/members/860cec0469348f9b/attributes","cluster-id":"3b988ca96e7ba1f2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T00:46:42.567365Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:46:42.567696Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:46:42.576461Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.34:2379"}
	{"level":"info","ts":"2024-03-14T00:46:42.585248Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T00:46:42.585435Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T00:46:42.585497Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [626436bd8baa10d4b158bca65269a473d02662a9ae4fd54a08f6da0ed7b05d74] <==
	{"level":"info","ts":"2024-03-14T00:46:35.085203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b became leader at term 3"}
	{"level":"info","ts":"2024-03-14T00:46:35.085213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 860cec0469348f9b elected leader 860cec0469348f9b at term 3"}
	{"level":"info","ts":"2024-03-14T00:46:35.091503Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"860cec0469348f9b","local-member-attributes":"{Name:kubernetes-upgrade-552430 ClientURLs:[https://192.168.61.34:2379]}","request-path":"/0/members/860cec0469348f9b/attributes","cluster-id":"3b988ca96e7ba1f2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T00:46:35.091772Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:46:35.091831Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T00:46:35.091852Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T00:46:35.091915Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:46:35.096205Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.34:2379"}
	{"level":"info","ts":"2024-03-14T00:46:35.11174Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-14T00:46:35.111871Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-552430","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.34:2380"],"advertise-client-urls":["https://192.168.61.34:2379"]}
	{"level":"info","ts":"2024-03-14T00:46:35.112801Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-03-14T00:46:35.128389Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39892","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:39892: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T00:46:35.128435Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39852","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:39852: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T00:46:35.128445Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39856","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:39856: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T00:46:35.130144Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39870","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:39870: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T00:46:35.13023Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T00:46:35.130271Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T00:46:35.130316Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.34:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T00:46:35.130323Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.34:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-14T00:46:35.130358Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"860cec0469348f9b","current-leader-member-id":"860cec0469348f9b"}
	2024/03/14 00:46:35 WARNING: [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	2024/03/14 00:46:35 WARNING: [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	{"level":"info","ts":"2024-03-14T00:46:35.143542Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.61.34:2380"}
	{"level":"info","ts":"2024-03-14T00:46:35.143898Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.61.34:2380"}
	{"level":"info","ts":"2024-03-14T00:46:35.143987Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-552430","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.34:2380"],"advertise-client-urls":["https://192.168.61.34:2379"]}
	
	
	==> kernel <==
	 00:46:49 up 0 min,  0 users,  load average: 1.76, 0.49, 0.17
	Linux kubernetes-upgrade-552430 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [83f9bcde6c185ddacfdefd17dcb4f859f47649fd62df45a23f9e0c99f44b3cf9] <==
	I0314 00:46:33.381885       1 options.go:222] external host was not specified, using 192.168.61.34
	I0314 00:46:33.385693       1 server.go:148] Version: v1.29.0-rc.2
	I0314 00:46:33.385849       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:46:34.810927       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0314 00:46:34.821062       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0314 00:46:34.821138       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0314 00:46:34.821380       1 instance.go:297] Using reconciler: lease
	W0314 00:46:35.131802       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	W0314 00:46:35.131984       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "error reading server preface: read tcp 127.0.0.1:39856->127.0.0.1:2379: read: connection reset by peer"
	W0314 00:46:35.132406       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: failed to write client preface: write tcp 127.0.0.1:39852->127.0.0.1:2379: write: broken pipe"
	
	
	==> kube-apiserver [9475658694ca67fdffd5b6b7e820efb91d42f0fa5d6c5d4a16677919c06aa141] <==
	I0314 00:46:44.486232       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0314 00:46:44.486385       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0314 00:46:44.486645       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0314 00:46:44.486736       1 available_controller.go:423] Starting AvailableConditionController
	I0314 00:46:44.486851       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0314 00:46:44.673337       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 00:46:44.684274       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 00:46:44.689169       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 00:46:44.702110       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 00:46:44.712720       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 00:46:44.713515       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 00:46:44.718087       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 00:46:44.718110       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0314 00:46:44.718463       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0314 00:46:44.721957       1 aggregator.go:165] initial CRD sync complete...
	I0314 00:46:44.726096       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 00:46:44.726187       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 00:46:44.726214       1 cache.go:39] Caches are synced for autoregister controller
	E0314 00:46:44.746671       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0314 00:46:45.488531       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 00:46:46.433611       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 00:46:46.447261       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 00:46:46.502294       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 00:46:46.577691       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 00:46:46.586205       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [9f294bcf08c0fabfc43f0b7a54d4153484c99215e505abd902917671709e0807] <==
	
	
	==> kube-controller-manager [9fc406e098f85d092f21d7202818adec5be61ed8856c761e06663b8ebdf9e14f] <==
	I0314 00:46:49.255722       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0314 00:46:49.255830       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0314 00:46:49.255846       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0314 00:46:49.397154       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0314 00:46:49.397266       1 gc_controller.go:101] "Starting GC controller"
	I0314 00:46:49.397279       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0314 00:46:49.548117       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0314 00:46:49.548381       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0314 00:46:49.548867       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0314 00:46:49.586873       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 00:46:49.586971       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0314 00:46:49.641683       1 node_lifecycle_controller.go:425] "Controller will reconcile labels"
	I0314 00:46:49.641779       1 controllermanager.go:735] "Started controller" controller="node-lifecycle-controller"
	I0314 00:46:49.641889       1 node_lifecycle_controller.go:459] "Sending events to api server"
	I0314 00:46:49.641937       1 node_lifecycle_controller.go:470] "Starting node controller"
	I0314 00:46:49.641975       1 shared_informer.go:311] Waiting for caches to sync for taint
	E0314 00:46:49.687752       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0314 00:46:49.687805       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0314 00:46:49.839154       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0314 00:46:49.839190       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0314 00:46:49.839286       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0314 00:46:49.839293       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0314 00:46:49.988957       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0314 00:46:49.989194       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0314 00:46:49.989236       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	
	
	==> kube-scheduler [2c3c843321a9e47a42151733bb65c4c34d61ae4073ad1c2c4647d37b051f272a] <==
	I0314 00:46:42.266003       1 serving.go:380] Generated self-signed cert in-memory
	W0314 00:46:44.591770       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 00:46:44.591951       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 00:46:44.591967       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 00:46:44.591977       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 00:46:44.673200       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0314 00:46:44.674535       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:46:44.677928       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 00:46:44.678080       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 00:46:44.678132       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 00:46:44.678571       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 00:46:44.779118       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b8d0b45f648e4ce4ca30ad7e163aa9b1ac459e381f267c3825b47c75d92ae85f] <==
	
	
	==> kubelet <==
	Mar 14 00:46:40 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:40.734905    2295 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/e208f2e84a7a8c2b7ba4408477b4151b-etcd-data\") pod \"etcd-kubernetes-upgrade-552430\" (UID: \"e208f2e84a7a8c2b7ba4408477b4151b\") " pod="kube-system/etcd-kubernetes-upgrade-552430"
	Mar 14 00:46:40 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:40.734961    2295 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99468a4d816f0760822a2e5e04a17f4f-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-552430\" (UID: \"99468a4d816f0760822a2e5e04a17f4f\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-552430"
	Mar 14 00:46:40 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:40.734989    2295 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99468a4d816f0760822a2e5e04a17f4f-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-552430\" (UID: \"99468a4d816f0760822a2e5e04a17f4f\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-552430"
	Mar 14 00:46:40 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:40.735055    2295 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dec4ca548727b6ef84c1ff50bcfed066-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-552430\" (UID: \"dec4ca548727b6ef84c1ff50bcfed066\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-552430"
	Mar 14 00:46:40 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:40.735083    2295 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dec4ca548727b6ef84c1ff50bcfed066-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-552430\" (UID: \"dec4ca548727b6ef84c1ff50bcfed066\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-552430"
	Mar 14 00:46:40 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:40.735140    2295 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f15d8b7421373280f42f6a9099230430-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-552430\" (UID: \"f15d8b7421373280f42f6a9099230430\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-552430"
	Mar 14 00:46:40 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:40.735166    2295 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/e208f2e84a7a8c2b7ba4408477b4151b-etcd-certs\") pod \"etcd-kubernetes-upgrade-552430\" (UID: \"e208f2e84a7a8c2b7ba4408477b4151b\") " pod="kube-system/etcd-kubernetes-upgrade-552430"
	Mar 14 00:46:40 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:40.735204    2295 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99468a4d816f0760822a2e5e04a17f4f-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-552430\" (UID: \"99468a4d816f0760822a2e5e04a17f4f\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-552430"
	Mar 14 00:46:40 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:40.735229    2295 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dec4ca548727b6ef84c1ff50bcfed066-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-552430\" (UID: \"dec4ca548727b6ef84c1ff50bcfed066\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-552430"
	Mar 14 00:46:40 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:40.735259    2295 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dec4ca548727b6ef84c1ff50bcfed066-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-552430\" (UID: \"dec4ca548727b6ef84c1ff50bcfed066\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-552430"
	Mar 14 00:46:40 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:40.735279    2295 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dec4ca548727b6ef84c1ff50bcfed066-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-552430\" (UID: \"dec4ca548727b6ef84c1ff50bcfed066\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-552430"
	Mar 14 00:46:40 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:40.837564    2295 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-552430"
	Mar 14 00:46:40 kubernetes-upgrade-552430 kubelet[2295]: E0314 00:46:40.838495    2295 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.34:8443: connect: connection refused" node="kubernetes-upgrade-552430"
	Mar 14 00:46:41 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:41.005633    2295 scope.go:117] "RemoveContainer" containerID="b8d0b45f648e4ce4ca30ad7e163aa9b1ac459e381f267c3825b47c75d92ae85f"
	Mar 14 00:46:41 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:41.006870    2295 scope.go:117] "RemoveContainer" containerID="626436bd8baa10d4b158bca65269a473d02662a9ae4fd54a08f6da0ed7b05d74"
	Mar 14 00:46:41 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:41.008940    2295 scope.go:117] "RemoveContainer" containerID="83f9bcde6c185ddacfdefd17dcb4f859f47649fd62df45a23f9e0c99f44b3cf9"
	Mar 14 00:46:41 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:41.010079    2295 scope.go:117] "RemoveContainer" containerID="9f294bcf08c0fabfc43f0b7a54d4153484c99215e505abd902917671709e0807"
	Mar 14 00:46:41 kubernetes-upgrade-552430 kubelet[2295]: E0314 00:46:41.155413    2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-552430?timeout=10s\": dial tcp 192.168.61.34:8443: connect: connection refused" interval="800ms"
	Mar 14 00:46:41 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:41.242959    2295 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-552430"
	Mar 14 00:46:41 kubernetes-upgrade-552430 kubelet[2295]: E0314 00:46:41.244572    2295 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.34:8443: connect: connection refused" node="kubernetes-upgrade-552430"
	Mar 14 00:46:42 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:42.046259    2295 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-552430"
	Mar 14 00:46:44 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:44.715529    2295 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-552430"
	Mar 14 00:46:44 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:44.716096    2295 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-552430"
	Mar 14 00:46:45 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:45.513914    2295 apiserver.go:52] "Watching apiserver"
	Mar 14 00:46:45 kubernetes-upgrade-552430 kubelet[2295]: I0314 00:46:45.534368    2295 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	

-- /stdout --
** stderr ** 
	E0314 00:46:48.605096   57006 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18375-4912/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-552430 -n kubernetes-upgrade-552430
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-552430 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-552430 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-552430 describe pod storage-provisioner: exit status 1 (76.246924ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-552430 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-552430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-552430
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-552430: (1.292369566s)
--- FAIL: TestKubernetesUpgrade (372.51s)
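A note on the "bufio.Scanner: token too long" error captured in the stderr block above: Go's bufio.Scanner refuses to return any single line longer than its default token limit (bufio.MaxScanTokenSize, 64 KiB), which is why reading /home/jenkins/minikube-integration/18375-4912/.minikube/logs/lastStart.txt failed. The snippet below is only an illustrative sketch of how such a file could be read with an enlarged scanner buffer; it is not minikube's actual code, and the file path and the 10 MiB cap are assumptions chosen for the example.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path, used here purely for illustration.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default limit is bufio.MaxScanTokenSize (64 KiB); any longer line
		// makes Scan() stop and Err() report "bufio.Scanner: token too long".
		// Supplying a larger maximum lets very long log lines through.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for sc.Scan() {
			_ = sc.Text() // process one log line at a time
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}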

x
+
TestPause/serial/SecondStartNoReconfiguration (64.71s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-501107 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-501107 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.375757453s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-501107] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-501107" primary control-plane node in "pause-501107" cluster
	* Updating the running kvm2 "pause-501107" VM ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-501107" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0314 00:41:14.018463   49120 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:41:14.018635   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:41:14.018646   49120 out.go:304] Setting ErrFile to fd 2...
	I0314 00:41:14.018660   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:41:14.019077   49120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:41:14.019855   49120 out.go:298] Setting JSON to false
	I0314 00:41:14.021243   49120 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5017,"bootTime":1710371857,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:41:14.021335   49120 start.go:139] virtualization: kvm guest
	I0314 00:41:14.023814   49120 out.go:177] * [pause-501107] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:41:14.025397   49120 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:41:14.025369   49120 notify.go:220] Checking for updates...
	I0314 00:41:14.026813   49120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:41:14.028162   49120 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:41:14.029779   49120 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:41:14.031082   49120 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:41:14.032207   49120 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:41:14.034535   49120 config.go:182] Loaded profile config "pause-501107": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:41:14.035021   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:41:14.035069   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:41:14.055566   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0314 00:41:14.056286   49120 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:41:14.056877   49120 main.go:141] libmachine: Using API Version  1
	I0314 00:41:14.056901   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:41:14.057334   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:41:14.057499   49120 main.go:141] libmachine: (pause-501107) Calling .DriverName
	I0314 00:41:14.057792   49120 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:41:14.058121   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:41:14.058143   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:41:14.078067   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42607
	I0314 00:41:14.078522   49120 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:41:14.079047   49120 main.go:141] libmachine: Using API Version  1
	I0314 00:41:14.079067   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:41:14.079388   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:41:14.079658   49120 main.go:141] libmachine: (pause-501107) Calling .DriverName
	I0314 00:41:14.123683   49120 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 00:41:14.125089   49120 start.go:297] selected driver: kvm2
	I0314 00:41:14.125114   49120 start.go:901] validating driver "kvm2" against &{Name:pause-501107 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.28.4 ClusterName:pause-501107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:41:14.125347   49120 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:41:14.125805   49120 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:41:14.125891   49120 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:41:14.146019   49120 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:41:14.147000   49120 cni.go:84] Creating CNI manager for ""
	I0314 00:41:14.147025   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:41:14.147132   49120 start.go:340] cluster config:
	{Name:pause-501107 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-501107 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:41:14.147318   49120 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:41:14.149021   49120 out.go:177] * Starting "pause-501107" primary control-plane node in "pause-501107" cluster
	I0314 00:41:14.150152   49120 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:41:14.150184   49120 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 00:41:14.150194   49120 cache.go:56] Caching tarball of preloaded images
	I0314 00:41:14.150259   49120 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 00:41:14.150278   49120 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 00:41:14.150390   49120 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/pause-501107/config.json ...
	I0314 00:41:14.150606   49120 start.go:360] acquireMachinesLock for pause-501107: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:41:38.884445   49120 start.go:364] duration metric: took 24.733810718s to acquireMachinesLock for "pause-501107"
	I0314 00:41:38.884499   49120 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:41:38.884524   49120 fix.go:54] fixHost starting: 
	I0314 00:41:38.884936   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:41:38.884988   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:41:38.905511   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0314 00:41:38.906154   49120 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:41:38.906836   49120 main.go:141] libmachine: Using API Version  1
	I0314 00:41:38.906864   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:41:38.907247   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:41:38.907448   49120 main.go:141] libmachine: (pause-501107) Calling .DriverName
	I0314 00:41:38.907604   49120 main.go:141] libmachine: (pause-501107) Calling .GetState
	I0314 00:41:38.909739   49120 fix.go:112] recreateIfNeeded on pause-501107: state=Running err=<nil>
	W0314 00:41:38.909778   49120 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:41:38.911820   49120 out.go:177] * Updating the running kvm2 "pause-501107" VM ...
	I0314 00:41:38.914128   49120 machine.go:94] provisionDockerMachine start ...
	I0314 00:41:38.914161   49120 main.go:141] libmachine: (pause-501107) Calling .DriverName
	I0314 00:41:38.914442   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHHostname
	I0314 00:41:38.917495   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:38.917913   49120 main.go:141] libmachine: (pause-501107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:b7:eb", ip: ""} in network mk-pause-501107: {Iface:virbr1 ExpiryTime:2024-03-14 01:39:48 +0000 UTC Type:0 Mac:52:54:00:b5:b7:eb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:pause-501107 Clientid:01:52:54:00:b5:b7:eb}
	I0314 00:41:38.917943   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined IP address 192.168.39.149 and MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:38.918130   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHPort
	I0314 00:41:38.918601   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHKeyPath
	I0314 00:41:38.918828   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHKeyPath
	I0314 00:41:38.919021   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHUsername
	I0314 00:41:38.919220   49120 main.go:141] libmachine: Using SSH client type: native
	I0314 00:41:38.919445   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0314 00:41:38.919461   49120 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:41:39.040387   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-501107
	
	I0314 00:41:39.040420   49120 main.go:141] libmachine: (pause-501107) Calling .GetMachineName
	I0314 00:41:39.040649   49120 buildroot.go:166] provisioning hostname "pause-501107"
	I0314 00:41:39.040672   49120 main.go:141] libmachine: (pause-501107) Calling .GetMachineName
	I0314 00:41:39.040845   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHHostname
	I0314 00:41:39.043603   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:39.044077   49120 main.go:141] libmachine: (pause-501107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:b7:eb", ip: ""} in network mk-pause-501107: {Iface:virbr1 ExpiryTime:2024-03-14 01:39:48 +0000 UTC Type:0 Mac:52:54:00:b5:b7:eb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:pause-501107 Clientid:01:52:54:00:b5:b7:eb}
	I0314 00:41:39.044108   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined IP address 192.168.39.149 and MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:39.044263   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHPort
	I0314 00:41:39.044493   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHKeyPath
	I0314 00:41:39.044680   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHKeyPath
	I0314 00:41:39.044820   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHUsername
	I0314 00:41:39.044989   49120 main.go:141] libmachine: Using SSH client type: native
	I0314 00:41:39.045215   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0314 00:41:39.045237   49120 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-501107 && echo "pause-501107" | sudo tee /etc/hostname
	I0314 00:41:39.181523   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-501107
	
	I0314 00:41:39.181551   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHHostname
	I0314 00:41:39.184414   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:39.184827   49120 main.go:141] libmachine: (pause-501107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:b7:eb", ip: ""} in network mk-pause-501107: {Iface:virbr1 ExpiryTime:2024-03-14 01:39:48 +0000 UTC Type:0 Mac:52:54:00:b5:b7:eb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:pause-501107 Clientid:01:52:54:00:b5:b7:eb}
	I0314 00:41:39.184856   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined IP address 192.168.39.149 and MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:39.185039   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHPort
	I0314 00:41:39.185258   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHKeyPath
	I0314 00:41:39.185446   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHKeyPath
	I0314 00:41:39.185623   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHUsername
	I0314 00:41:39.185798   49120 main.go:141] libmachine: Using SSH client type: native
	I0314 00:41:39.185972   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0314 00:41:39.185989   49120 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-501107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-501107/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-501107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:41:39.308577   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:41:39.308611   49120 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:41:39.308667   49120 buildroot.go:174] setting up certificates
	I0314 00:41:39.308679   49120 provision.go:84] configureAuth start
	I0314 00:41:39.308697   49120 main.go:141] libmachine: (pause-501107) Calling .GetMachineName
	I0314 00:41:39.309017   49120 main.go:141] libmachine: (pause-501107) Calling .GetIP
	I0314 00:41:39.312140   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:39.312512   49120 main.go:141] libmachine: (pause-501107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:b7:eb", ip: ""} in network mk-pause-501107: {Iface:virbr1 ExpiryTime:2024-03-14 01:39:48 +0000 UTC Type:0 Mac:52:54:00:b5:b7:eb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:pause-501107 Clientid:01:52:54:00:b5:b7:eb}
	I0314 00:41:39.312561   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined IP address 192.168.39.149 and MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:39.312843   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHHostname
	I0314 00:41:39.316199   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:39.316647   49120 main.go:141] libmachine: (pause-501107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:b7:eb", ip: ""} in network mk-pause-501107: {Iface:virbr1 ExpiryTime:2024-03-14 01:39:48 +0000 UTC Type:0 Mac:52:54:00:b5:b7:eb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:pause-501107 Clientid:01:52:54:00:b5:b7:eb}
	I0314 00:41:39.316681   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined IP address 192.168.39.149 and MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:39.316983   49120 provision.go:143] copyHostCerts
	I0314 00:41:39.317046   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:41:39.317056   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:41:39.317120   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:41:39.317255   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:41:39.317265   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:41:39.317292   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:41:39.317356   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:41:39.317363   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:41:39.317381   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:41:39.317440   49120 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.pause-501107 san=[127.0.0.1 192.168.39.149 localhost minikube pause-501107]
	I0314 00:41:39.489808   49120 provision.go:177] copyRemoteCerts
	I0314 00:41:39.489864   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:41:39.489903   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHHostname
	I0314 00:41:39.492768   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:39.493115   49120 main.go:141] libmachine: (pause-501107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:b7:eb", ip: ""} in network mk-pause-501107: {Iface:virbr1 ExpiryTime:2024-03-14 01:39:48 +0000 UTC Type:0 Mac:52:54:00:b5:b7:eb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:pause-501107 Clientid:01:52:54:00:b5:b7:eb}
	I0314 00:41:39.493140   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined IP address 192.168.39.149 and MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:39.493336   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHPort
	I0314 00:41:39.493550   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHKeyPath
	I0314 00:41:39.493740   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHUsername
	I0314 00:41:39.493929   49120 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/pause-501107/id_rsa Username:docker}
	I0314 00:41:39.583227   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:41:39.623804   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0314 00:41:39.662358   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 00:41:39.694106   49120 provision.go:87] duration metric: took 385.413192ms to configureAuth
	I0314 00:41:39.694134   49120 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:41:39.694329   49120 config.go:182] Loaded profile config "pause-501107": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:41:39.694410   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHHostname
	I0314 00:41:39.697216   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:39.697783   49120 main.go:141] libmachine: (pause-501107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:b7:eb", ip: ""} in network mk-pause-501107: {Iface:virbr1 ExpiryTime:2024-03-14 01:39:48 +0000 UTC Type:0 Mac:52:54:00:b5:b7:eb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:pause-501107 Clientid:01:52:54:00:b5:b7:eb}
	I0314 00:41:39.697811   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined IP address 192.168.39.149 and MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:39.698079   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHPort
	I0314 00:41:39.698295   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHKeyPath
	I0314 00:41:39.698474   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHKeyPath
	I0314 00:41:39.698634   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHUsername
	I0314 00:41:39.698843   49120 main.go:141] libmachine: Using SSH client type: native
	I0314 00:41:39.699058   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0314 00:41:39.699082   49120 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:41:47.489196   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:41:47.489229   49120 machine.go:97] duration metric: took 8.575075082s to provisionDockerMachine
	I0314 00:41:47.489243   49120 start.go:293] postStartSetup for "pause-501107" (driver="kvm2")
	I0314 00:41:47.489256   49120 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:41:47.489283   49120 main.go:141] libmachine: (pause-501107) Calling .DriverName
	I0314 00:41:47.489656   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:41:47.489692   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHHostname
	I0314 00:41:47.492411   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:47.492725   49120 main.go:141] libmachine: (pause-501107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:b7:eb", ip: ""} in network mk-pause-501107: {Iface:virbr1 ExpiryTime:2024-03-14 01:39:48 +0000 UTC Type:0 Mac:52:54:00:b5:b7:eb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:pause-501107 Clientid:01:52:54:00:b5:b7:eb}
	I0314 00:41:47.492755   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined IP address 192.168.39.149 and MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:47.492959   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHPort
	I0314 00:41:47.493139   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHKeyPath
	I0314 00:41:47.493285   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHUsername
	I0314 00:41:47.493425   49120 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/pause-501107/id_rsa Username:docker}
	I0314 00:41:47.582543   49120 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:41:47.587143   49120 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:41:47.587162   49120 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:41:47.587221   49120 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:41:47.587291   49120 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:41:47.587372   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:41:47.597551   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:41:47.622520   49120 start.go:296] duration metric: took 133.265961ms for postStartSetup
	I0314 00:41:47.622561   49120 fix.go:56] duration metric: took 8.738055112s for fixHost
	I0314 00:41:47.622584   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHHostname
	I0314 00:41:47.625393   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:47.625773   49120 main.go:141] libmachine: (pause-501107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:b7:eb", ip: ""} in network mk-pause-501107: {Iface:virbr1 ExpiryTime:2024-03-14 01:39:48 +0000 UTC Type:0 Mac:52:54:00:b5:b7:eb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:pause-501107 Clientid:01:52:54:00:b5:b7:eb}
	I0314 00:41:47.625803   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined IP address 192.168.39.149 and MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:47.625963   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHPort
	I0314 00:41:47.626153   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHKeyPath
	I0314 00:41:47.626266   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHKeyPath
	I0314 00:41:47.626366   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHUsername
	I0314 00:41:47.626504   49120 main.go:141] libmachine: Using SSH client type: native
	I0314 00:41:47.626691   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0314 00:41:47.626723   49120 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 00:41:47.744125   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710376907.732945588
	
	I0314 00:41:47.744154   49120 fix.go:216] guest clock: 1710376907.732945588
	I0314 00:41:47.744165   49120 fix.go:229] Guest: 2024-03-14 00:41:47.732945588 +0000 UTC Remote: 2024-03-14 00:41:47.622565601 +0000 UTC m=+33.670763049 (delta=110.379987ms)
	I0314 00:41:47.744192   49120 fix.go:200] guest clock delta is within tolerance: 110.379987ms
	I0314 00:41:47.744200   49120 start.go:83] releasing machines lock for "pause-501107", held for 8.859719524s
	I0314 00:41:47.744223   49120 main.go:141] libmachine: (pause-501107) Calling .DriverName
	I0314 00:41:47.744512   49120 main.go:141] libmachine: (pause-501107) Calling .GetIP
	I0314 00:41:47.747613   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:47.748083   49120 main.go:141] libmachine: (pause-501107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:b7:eb", ip: ""} in network mk-pause-501107: {Iface:virbr1 ExpiryTime:2024-03-14 01:39:48 +0000 UTC Type:0 Mac:52:54:00:b5:b7:eb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:pause-501107 Clientid:01:52:54:00:b5:b7:eb}
	I0314 00:41:47.748117   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined IP address 192.168.39.149 and MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:47.748307   49120 main.go:141] libmachine: (pause-501107) Calling .DriverName
	I0314 00:41:47.748903   49120 main.go:141] libmachine: (pause-501107) Calling .DriverName
	I0314 00:41:47.749110   49120 main.go:141] libmachine: (pause-501107) Calling .DriverName
	I0314 00:41:47.749197   49120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:41:47.749254   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHHostname
	I0314 00:41:47.749452   49120 ssh_runner.go:195] Run: cat /version.json
	I0314 00:41:47.749475   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHHostname
	I0314 00:41:47.752261   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:47.752656   49120 main.go:141] libmachine: (pause-501107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:b7:eb", ip: ""} in network mk-pause-501107: {Iface:virbr1 ExpiryTime:2024-03-14 01:39:48 +0000 UTC Type:0 Mac:52:54:00:b5:b7:eb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:pause-501107 Clientid:01:52:54:00:b5:b7:eb}
	I0314 00:41:47.752688   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined IP address 192.168.39.149 and MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:47.752973   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHPort
	I0314 00:41:47.753125   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:47.753159   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHKeyPath
	I0314 00:41:47.753304   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHUsername
	I0314 00:41:47.753463   49120 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/pause-501107/id_rsa Username:docker}
	I0314 00:41:47.753663   49120 main.go:141] libmachine: (pause-501107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:b7:eb", ip: ""} in network mk-pause-501107: {Iface:virbr1 ExpiryTime:2024-03-14 01:39:48 +0000 UTC Type:0 Mac:52:54:00:b5:b7:eb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:pause-501107 Clientid:01:52:54:00:b5:b7:eb}
	I0314 00:41:47.753723   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined IP address 192.168.39.149 and MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:47.753858   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHPort
	I0314 00:41:47.754020   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHKeyPath
	I0314 00:41:47.754193   49120 main.go:141] libmachine: (pause-501107) Calling .GetSSHUsername
	I0314 00:41:47.754362   49120 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/pause-501107/id_rsa Username:docker}
	I0314 00:41:47.836941   49120 ssh_runner.go:195] Run: systemctl --version
	I0314 00:41:47.874343   49120 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:41:48.042749   49120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:41:48.050399   49120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:41:48.050465   49120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:41:48.060772   49120 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
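Note: the step above neutralizes any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them. A shell-safe rendering of that find invocation (same paths and patterns as in the log; the quoting is added here for readability only) would be:

    # Readable form of the rename step above (quoting is illustrative, not what minikube literally sends)
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;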
	I0314 00:41:48.060801   49120 start.go:494] detecting cgroup driver to use...
	I0314 00:41:48.060869   49120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:41:48.079133   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:41:48.109718   49120 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:41:48.109774   49120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:41:48.220264   49120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:41:48.341258   49120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:41:48.656678   49120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:41:48.983615   49120 docker.go:233] disabling docker service ...
	I0314 00:41:48.983695   49120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:41:49.083232   49120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:41:49.116972   49120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:41:49.346905   49120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:41:49.547934   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
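Note: masking (as opposed to merely disabling) prevents docker.service and cri-docker.service from being started even as a dependency of another unit, which matters because the socket units could otherwise resurrect the daemons. A quick way to confirm the resulting state, assuming the same unit names (illustrative check, not part of the test):

    # Masked units report "masked" and refuse manual starts
    sudo systemctl is-enabled docker.service cri-docker.service
    sudo systemctl start docker.service    # expected to fail: "Unit docker.service is masked."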
	I0314 00:41:49.565395   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:41:49.616342   49120 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:41:49.616415   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:41:49.636167   49120 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:41:49.636253   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:41:49.656562   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:41:49.672311   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
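Note: the three sed edits above set the pause image, the cgroup manager, and the conmon cgroup in CRI-O's drop-in config. As a rough sketch only (minikube edits the existing 02-crio.conf in place; the section headers and the 99-example.conf name below are assumed from CRI-O's stock layout), the same settings expressed as a fresh drop-in would be:

    # Equivalent settings as a hypothetical standalone drop-in (not what minikube literally writes)
    sudo tee /etc/crio/crio.conf.d/99-example.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    EOF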
	I0314 00:41:49.695371   49120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:41:49.768869   49120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:41:49.789598   49120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
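Note: the first command above only reads net.bridge.bridge-nf-call-iptables (bridged pod traffic must be visible to iptables), while the second enables IPv4 forwarding for the current session. A persistent equivalent of enforcing both, assuming the br_netfilter module is loaded and using an illustrative file name, would be:

    # Persistent form of the two runtime settings above (file name is illustrative)
    cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system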
	I0314 00:41:49.805244   49120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:41:50.040277   49120 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:41:50.565900   49120 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:41:50.565980   49120 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:41:50.572238   49120 start.go:562] Will wait 60s for crictl version
	I0314 00:41:50.572303   49120 ssh_runner.go:195] Run: which crictl
	I0314 00:41:50.577349   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:41:50.696116   49120 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:41:50.696212   49120 ssh_runner.go:195] Run: crio --version
	I0314 00:41:50.834425   49120 ssh_runner.go:195] Run: crio --version
	I0314 00:41:51.149313   49120 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 00:41:51.150311   49120 main.go:141] libmachine: (pause-501107) Calling .GetIP
	I0314 00:41:51.153403   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:51.153831   49120 main.go:141] libmachine: (pause-501107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:b7:eb", ip: ""} in network mk-pause-501107: {Iface:virbr1 ExpiryTime:2024-03-14 01:39:48 +0000 UTC Type:0 Mac:52:54:00:b5:b7:eb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:pause-501107 Clientid:01:52:54:00:b5:b7:eb}
	I0314 00:41:51.153863   49120 main.go:141] libmachine: (pause-501107) DBG | domain pause-501107 has defined IP address 192.168.39.149 and MAC address 52:54:00:b5:b7:eb in network mk-pause-501107
	I0314 00:41:51.154127   49120 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 00:41:51.167475   49120 kubeadm.go:877] updating cluster {Name:pause-501107 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4
ClusterName:pause-501107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:41:51.167645   49120 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:41:51.167703   49120 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:41:51.233185   49120 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:41:51.233217   49120 crio.go:415] Images already preloaded, skipping extraction
	I0314 00:41:51.233278   49120 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:41:51.277611   49120 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:41:51.277633   49120 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:41:51.277641   49120 kubeadm.go:928] updating node { 192.168.39.149 8443 v1.28.4 crio true true} ...
	I0314 00:41:51.277749   49120 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-501107 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-501107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
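Note: the empty ExecStart= line in the unit above is the usual systemd drop-in idiom: it clears the packaged ExecStart so the following line fully replaces it instead of adding a second command. To see the merged unit as systemd applies it on the node, one could run (illustrative; the test does not run these):

    # Inspect the kubelet unit after the drop-in is installed
    minikube -p pause-501107 ssh -- systemctl cat kubelet
    minikube -p pause-501107 ssh -- systemctl show kubelet -p ExecStart --no-pager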
	I0314 00:41:51.277868   49120 ssh_runner.go:195] Run: crio config
	I0314 00:41:51.328868   49120 cni.go:84] Creating CNI manager for ""
	I0314 00:41:51.328892   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:41:51.328907   49120 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:41:51.328927   49120 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.149 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-501107 NodeName:pause-501107 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:41:51.329041   49120 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-501107"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:41:51.329096   49120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:41:51.339769   49120 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:41:51.339833   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:41:51.349669   49120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0314 00:41:51.367788   49120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:41:51.385589   49120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
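Note: the kubeadm config shown above is what was just copied to /var/tmp/minikube/kubeadm.yaml.new. If you need to sanity-check a generated config of this shape by hand, kubeadm can parse it without applying anything to the node (illustrative; the test itself never invokes this):

    # Parse and validate the generated config without changing node state
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run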
	I0314 00:41:51.403780   49120 ssh_runner.go:195] Run: grep 192.168.39.149	control-plane.minikube.internal$ /etc/hosts
	I0314 00:41:51.408536   49120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:41:51.553640   49120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:41:51.570031   49120 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/pause-501107 for IP: 192.168.39.149
	I0314 00:41:51.570059   49120 certs.go:194] generating shared ca certs ...
	I0314 00:41:51.570079   49120 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:41:51.570232   49120 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:41:51.570267   49120 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:41:51.570276   49120 certs.go:256] generating profile certs ...
	I0314 00:41:51.570360   49120 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/pause-501107/client.key
	I0314 00:41:51.570442   49120 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/pause-501107/apiserver.key.1faf629a
	I0314 00:41:51.570498   49120 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/pause-501107/proxy-client.key
	I0314 00:41:51.570613   49120 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:41:51.570656   49120 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:41:51.570676   49120 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:41:51.570711   49120 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:41:51.571248   49120 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:41:51.571283   49120 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:41:51.571339   49120 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:41:51.571984   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:41:51.605508   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:41:51.642454   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:41:51.675914   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:41:51.704607   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/pause-501107/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0314 00:41:51.732243   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/pause-501107/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:41:51.802883   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/pause-501107/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:41:51.840294   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/pause-501107/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:41:51.875655   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:41:51.903353   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:41:51.936750   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:41:51.964273   49120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:41:51.984675   49120 ssh_runner.go:195] Run: openssl version
	I0314 00:41:51.990849   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:41:52.002875   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:41:52.009744   49120 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:41:52.009822   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:41:52.016131   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:41:52.027508   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:41:52.039345   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:41:52.044329   49120 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:41:52.044392   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:41:52.052640   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:41:52.067626   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:41:52.081002   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:41:52.087222   49120 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:41:52.087343   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:41:52.095532   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
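Note: the b5213941.0, 51391683.0 and 3ec20f2e.0 names above are OpenSSL subject-hash links: the TLS stack resolves CAs in /etc/ssl/certs by the hash of each certificate's subject, so every trusted PEM gets a <hash>.0 symlink. The pattern followed by the ln -fs commands above is essentially:

    # How the <hash>.0 symlink names are derived (same pattern as the commands above)
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"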
	I0314 00:41:52.110007   49120 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:41:52.115311   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:41:52.122954   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:41:52.129439   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:41:52.136216   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:41:52.142496   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:41:52.150514   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
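Note: each -checkend 86400 call above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means it expires within that window and is treated as needing renewal. A standalone check against one of the same files looks like this (illustrative):

    # -checkend returns 0 if the cert is still valid 24h from now, non-zero otherwise (path taken from the log)
    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "still valid for at least 24h"
    else
      echo "expires within 24h (or already expired)"
    fi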
	I0314 00:41:52.158659   49120 kubeadm.go:391] StartCluster: {Name:pause-501107 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:pause-501107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:41:52.158842   49120 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:41:52.158915   49120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:41:52.214914   49120 cri.go:89] found id: "564c6d1c26294d9e13c92ef918498427fc6d10ee56d848841c9ad11172aa1ebc"
	I0314 00:41:52.214937   49120 cri.go:89] found id: "4adb0ce5b1da890d2eefe281c08c7ad6148d1b18ffc856406dc43b892368e2a7"
	I0314 00:41:52.214943   49120 cri.go:89] found id: "2db445d273788ce981eb36c95b1fa652d13ea17ae0b2c66f57850e2b21523526"
	I0314 00:41:52.214949   49120 cri.go:89] found id: "c9dcbb05205596b1feb1dd58a5099aa834f378b05a5767e04eafbc4a72cae716"
	I0314 00:41:52.214954   49120 cri.go:89] found id: "02725e3ead70526b03287924b07f3153e75d8a2e487c2f74233b3fed554a3f61"
	I0314 00:41:52.214958   49120 cri.go:89] found id: "e5a9980abc19fcb8c840dbb9799d3de518ef33585cda7f448409998c7a85dbd5"
	I0314 00:41:52.214962   49120 cri.go:89] found id: ""
	I0314 00:41:52.215019   49120 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-501107 -n pause-501107
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-501107 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-501107 logs -n 25: (1.441066976s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-326260 sudo              | cilium-326260             | jenkins | v1.32.0 | 14 Mar 24 00:37 UTC |                     |
	|         | systemctl status crio --all        |                           |         |         |                     |                     |
	|         | --full --no-pager                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-326260 sudo              | cilium-326260             | jenkins | v1.32.0 | 14 Mar 24 00:37 UTC |                     |
	|         | systemctl cat crio --no-pager      |                           |         |         |                     |                     |
	| ssh     | -p cilium-326260 sudo find         | cilium-326260             | jenkins | v1.32.0 | 14 Mar 24 00:37 UTC |                     |
	|         | /etc/crio -type f -exec sh -c      |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;               |                           |         |         |                     |                     |
	| ssh     | -p cilium-326260 sudo crio         | cilium-326260             | jenkins | v1.32.0 | 14 Mar 24 00:37 UTC |                     |
	|         | config                             |                           |         |         |                     |                     |
	| delete  | -p cilium-326260                   | cilium-326260             | jenkins | v1.32.0 | 14 Mar 24 00:37 UTC | 14 Mar 24 00:37 UTC |
	| start   | -p force-systemd-flag-058213       | force-systemd-flag-058213 | jenkins | v1.32.0 | 14 Mar 24 00:37 UTC | 14 Mar 24 00:38 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-058213 ssh cat  | force-systemd-flag-058213 | jenkins | v1.32.0 | 14 Mar 24 00:38 UTC | 14 Mar 24 00:38 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-058213       | force-systemd-flag-058213 | jenkins | v1.32.0 | 14 Mar 24 00:38 UTC | 14 Mar 24 00:38 UTC |
	| start   | -p force-systemd-env-233196        | force-systemd-env-233196  | jenkins | v1.32.0 | 14 Mar 24 00:38 UTC | 14 Mar 24 00:39 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-863544          | running-upgrade-863544    | jenkins | v1.32.0 | 14 Mar 24 00:39 UTC | 14 Mar 24 00:40 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-820136             | offline-crio-820136       | jenkins | v1.32.0 | 14 Mar 24 00:39 UTC | 14 Mar 24 00:39 UTC |
	| start   | -p pause-501107 --memory=2048      | pause-501107              | jenkins | v1.32.0 | 14 Mar 24 00:39 UTC | 14 Mar 24 00:41 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-848457 stop        | minikube                  | jenkins | v1.26.0 | 14 Mar 24 00:39 UTC | 14 Mar 24 00:39 UTC |
	| start   | -p stopped-upgrade-848457          | stopped-upgrade-848457    | jenkins | v1.32.0 | 14 Mar 24 00:39 UTC | 14 Mar 24 00:40 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-233196        | force-systemd-env-233196  | jenkins | v1.32.0 | 14 Mar 24 00:39 UTC | 14 Mar 24 00:39 UTC |
	| start   | -p cert-expiration-577166          | cert-expiration-577166    | jenkins | v1.32.0 | 14 Mar 24 00:39 UTC | 14 Mar 24 00:41 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-863544          | running-upgrade-863544    | jenkins | v1.32.0 | 14 Mar 24 00:40 UTC | 14 Mar 24 00:40 UTC |
	| start   | -p kubernetes-upgrade-552430       | kubernetes-upgrade-552430 | jenkins | v1.32.0 | 14 Mar 24 00:40 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-848457          | stopped-upgrade-848457    | jenkins | v1.32.0 | 14 Mar 24 00:40 UTC | 14 Mar 24 00:40 UTC |
	| start   | -p NoKubernetes-576005             | NoKubernetes-576005       | jenkins | v1.32.0 | 14 Mar 24 00:40 UTC |                     |
	|         | --no-kubernetes                    |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20          |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-576005             | NoKubernetes-576005       | jenkins | v1.32.0 | 14 Mar 24 00:40 UTC | 14 Mar 24 00:42 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-501107                    | pause-501107              | jenkins | v1.32.0 | 14 Mar 24 00:41 UTC | 14 Mar 24 00:42 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-576005             | NoKubernetes-576005       | jenkins | v1.32.0 | 14 Mar 24 00:42 UTC | 14 Mar 24 00:42 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-576005             | NoKubernetes-576005       | jenkins | v1.32.0 | 14 Mar 24 00:42 UTC | 14 Mar 24 00:42 UTC |
	| start   | -p NoKubernetes-576005             | NoKubernetes-576005       | jenkins | v1.32.0 | 14 Mar 24 00:42 UTC |                     |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:42:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:42:07.124431   49637 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:42:07.124651   49637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:42:07.124654   49637 out.go:304] Setting ErrFile to fd 2...
	I0314 00:42:07.124658   49637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:42:07.124831   49637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:42:07.125386   49637 out.go:298] Setting JSON to false
	I0314 00:42:07.126326   49637 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5070,"bootTime":1710371857,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:42:07.126378   49637 start.go:139] virtualization: kvm guest
	I0314 00:42:07.128715   49637 out.go:177] * [NoKubernetes-576005] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:42:07.130332   49637 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:42:07.130377   49637 notify.go:220] Checking for updates...
	I0314 00:42:07.131802   49637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:42:07.133274   49637 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:42:07.134682   49637 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:42:07.135993   49637 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:42:07.137208   49637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:42:07.138869   49637 config.go:182] Loaded profile config "cert-expiration-577166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:42:07.139000   49637 config.go:182] Loaded profile config "kubernetes-upgrade-552430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:42:07.139173   49637 config.go:182] Loaded profile config "pause-501107": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:42:07.139195   49637 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0314 00:42:07.139281   49637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:42:07.175907   49637 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 00:42:07.177262   49637 start.go:297] selected driver: kvm2
	I0314 00:42:07.177269   49637 start.go:901] validating driver "kvm2" against <nil>
	I0314 00:42:07.177278   49637 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:42:07.177561   49637 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0314 00:42:07.177636   49637 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:42:07.177711   49637 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:42:07.192887   49637 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:42:07.192922   49637 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 00:42:07.193357   49637 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0314 00:42:07.193516   49637 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 00:42:07.193563   49637 cni.go:84] Creating CNI manager for ""
	I0314 00:42:07.193570   49637 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:42:07.193576   49637 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 00:42:07.193595   49637 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0314 00:42:07.193658   49637 start.go:340] cluster config:
	{Name:NoKubernetes-576005 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-576005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:42:07.193740   49637 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:42:07.195507   49637 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-576005
	I0314 00:42:07.196860   49637 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0314 00:42:07.618130   49637 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0314 00:42:07.618335   49637 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/NoKubernetes-576005/config.json ...
	I0314 00:42:07.618372   49637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/NoKubernetes-576005/config.json: {Name:mk318b2b739825ccd6a86d1e3a4a2abb62ac6099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:42:07.618519   49637 start.go:360] acquireMachinesLock for NoKubernetes-576005: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:42:07.618543   49637 start.go:364] duration metric: took 16.627µs to acquireMachinesLock for "NoKubernetes-576005"
	I0314 00:42:07.618552   49637 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-576005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-576005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:42:07.618614   49637 start.go:125] createHost starting for "" (driver="kvm2")
	I0314 00:42:04.248911   49120 pod_ready.go:102] pod "coredns-5dd5756b68-wpvxx" in "kube-system" namespace has status "Ready":"False"
	I0314 00:42:06.248608   49120 pod_ready.go:92] pod "coredns-5dd5756b68-wpvxx" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:06.248634   49120 pod_ready.go:81] duration metric: took 6.007820063s for pod "coredns-5dd5756b68-wpvxx" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:06.248647   49120 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:08.257417   49120 pod_ready.go:102] pod "etcd-pause-501107" in "kube-system" namespace has status "Ready":"False"
	I0314 00:42:04.976856   48503 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 00:42:04.977648   48503 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:42:04.977845   48503 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
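Note: the kubelet-check messages above come from a start running in parallel with this test; kubeadm gave up waiting because the kubelet never answered its local health endpoint. When that happens, the usual first steps on the affected VM are to hit the endpoint directly and read the kubelet journal (illustrative commands, not something the test runs):

    # Typical first checks when kubeadm's kubelet-check fails like this (run inside the affected VM)
    curl -sSL http://localhost:10248/healthz
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet --no-pager -n 50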
	I0314 00:42:10.256167   49120 pod_ready.go:92] pod "etcd-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:10.256197   49120 pod_ready.go:81] duration metric: took 4.007541297s for pod "etcd-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.256211   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.261910   49120 pod_ready.go:92] pod "kube-apiserver-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:10.261936   49120 pod_ready.go:81] duration metric: took 5.716899ms for pod "kube-apiserver-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.261949   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.779064   49120 pod_ready.go:92] pod "kube-controller-manager-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:10.779088   49120 pod_ready.go:81] duration metric: took 517.131354ms for pod "kube-controller-manager-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.779098   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rb9kh" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.794989   49120 pod_ready.go:92] pod "kube-proxy-rb9kh" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:10.795012   49120 pod_ready.go:81] duration metric: took 15.908648ms for pod "kube-proxy-rb9kh" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.795022   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.800758   49120 pod_ready.go:92] pod "kube-scheduler-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:10.800787   49120 pod_ready.go:81] duration metric: took 5.757905ms for pod "kube-scheduler-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.800797   49120 pod_ready.go:38] duration metric: took 10.565383875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
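Note: the pod_ready waits above poll each control-plane pod for the Ready condition. A rough manual equivalent against the same profile would be the following, where the context name and label selectors are assumed from the log; the test uses its own poller rather than kubectl wait:

    # Rough manual equivalent of the readiness polling above (illustrative)
    kubectl --context pause-501107 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl --context pause-501107 -n kube-system wait pod \
      -l component=kube-apiserver --for=condition=Ready --timeout=4m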
	I0314 00:42:10.800826   49120 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:42:10.815952   49120 ops.go:34] apiserver oom_adj: -16
	I0314 00:42:10.815978   49120 kubeadm.go:591] duration metric: took 18.528593133s to restartPrimaryControlPlane
	I0314 00:42:10.815988   49120 kubeadm.go:393] duration metric: took 18.657338112s to StartCluster
	I0314 00:42:10.816007   49120 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:42:10.816084   49120 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:42:10.817266   49120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:42:10.817520   49120 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:42:10.819193   49120 out.go:177] * Verifying Kubernetes components...
	I0314 00:42:10.817568   49120 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:42:10.817750   49120 config.go:182] Loaded profile config "pause-501107": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:42:10.820477   49120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:42:10.821814   49120 out.go:177] * Enabled addons: 
	I0314 00:42:07.620635   49637 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0314 00:42:07.620839   49637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:42:07.620870   49637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:42:07.635550   49637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33673
	I0314 00:42:07.635940   49637 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:42:07.636426   49637 main.go:141] libmachine: Using API Version  1
	I0314 00:42:07.636442   49637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:42:07.636757   49637 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:42:07.636936   49637 main.go:141] libmachine: (NoKubernetes-576005) Calling .GetMachineName
	I0314 00:42:07.637099   49637 main.go:141] libmachine: (NoKubernetes-576005) Calling .DriverName
	I0314 00:42:07.637247   49637 start.go:159] libmachine.API.Create for "NoKubernetes-576005" (driver="kvm2")
	I0314 00:42:07.637272   49637 client.go:168] LocalClient.Create starting
	I0314 00:42:07.637295   49637 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem
	I0314 00:42:07.637327   49637 main.go:141] libmachine: Decoding PEM data...
	I0314 00:42:07.637339   49637 main.go:141] libmachine: Parsing certificate...
	I0314 00:42:07.637392   49637 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem
	I0314 00:42:07.637407   49637 main.go:141] libmachine: Decoding PEM data...
	I0314 00:42:07.637414   49637 main.go:141] libmachine: Parsing certificate...
	I0314 00:42:07.637427   49637 main.go:141] libmachine: Running pre-create checks...
	I0314 00:42:07.637432   49637 main.go:141] libmachine: (NoKubernetes-576005) Calling .PreCreateCheck
	I0314 00:42:07.637725   49637 main.go:141] libmachine: (NoKubernetes-576005) Calling .GetConfigRaw
	I0314 00:42:07.638090   49637 main.go:141] libmachine: Creating machine...
	I0314 00:42:07.638096   49637 main.go:141] libmachine: (NoKubernetes-576005) Calling .Create
	I0314 00:42:07.638216   49637 main.go:141] libmachine: (NoKubernetes-576005) Creating KVM machine...
	I0314 00:42:07.639437   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | found existing default KVM network
	I0314 00:42:07.640539   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:07.640371   49660 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:83:b9:5c} reservation:<nil>}
	I0314 00:42:07.641315   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:07.641242   49660 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:db:25:bb} reservation:<nil>}
	I0314 00:42:07.642306   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:07.642241   49660 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:60:3f:ac} reservation:<nil>}
	I0314 00:42:07.643449   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:07.643385   49660 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289ab0}
	I0314 00:42:07.643491   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | created network xml: 
	I0314 00:42:07.643499   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | <network>
	I0314 00:42:07.643505   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |   <name>mk-NoKubernetes-576005</name>
	I0314 00:42:07.643510   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |   <dns enable='no'/>
	I0314 00:42:07.643514   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |   
	I0314 00:42:07.643519   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0314 00:42:07.643532   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |     <dhcp>
	I0314 00:42:07.643537   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0314 00:42:07.643541   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |     </dhcp>
	I0314 00:42:07.643555   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |   </ip>
	I0314 00:42:07.643562   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |   
	I0314 00:42:07.643568   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | </network>
	I0314 00:42:07.643583   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | 
	I0314 00:42:07.648846   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | trying to create private KVM network mk-NoKubernetes-576005 192.168.72.0/24...
	I0314 00:42:07.719203   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | private KVM network mk-NoKubernetes-576005 192.168.72.0/24 created
	I0314 00:42:07.719224   49637 main.go:141] libmachine: (NoKubernetes-576005) Setting up store path in /home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005 ...
	I0314 00:42:07.719241   49637 main.go:141] libmachine: (NoKubernetes-576005) Building disk image from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 00:42:07.719253   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:07.719198   49660 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:42:07.719382   49637 main.go:141] libmachine: (NoKubernetes-576005) Downloading /home/jenkins/minikube-integration/18375-4912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 00:42:07.959602   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:07.959482   49660 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005/id_rsa...
	I0314 00:42:08.158783   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:08.158672   49660 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005/NoKubernetes-576005.rawdisk...
	I0314 00:42:08.158799   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Writing magic tar header
	I0314 00:42:08.158840   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Writing SSH key tar header
	I0314 00:42:08.158879   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:08.158803   49660 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005 ...
	I0314 00:42:08.158903   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005
	I0314 00:42:08.158934   49637 main.go:141] libmachine: (NoKubernetes-576005) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005 (perms=drwx------)
	I0314 00:42:08.158944   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines
	I0314 00:42:08.158951   49637 main.go:141] libmachine: (NoKubernetes-576005) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines (perms=drwxr-xr-x)
	I0314 00:42:08.158960   49637 main.go:141] libmachine: (NoKubernetes-576005) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube (perms=drwxr-xr-x)
	I0314 00:42:08.158965   49637 main.go:141] libmachine: (NoKubernetes-576005) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912 (perms=drwxrwxr-x)
	I0314 00:42:08.158971   49637 main.go:141] libmachine: (NoKubernetes-576005) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 00:42:08.158976   49637 main.go:141] libmachine: (NoKubernetes-576005) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 00:42:08.158983   49637 main.go:141] libmachine: (NoKubernetes-576005) Creating domain...
	I0314 00:42:08.158990   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:42:08.158995   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912
	I0314 00:42:08.159000   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 00:42:08.159004   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Checking permissions on dir: /home/jenkins
	I0314 00:42:08.159009   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Checking permissions on dir: /home
	I0314 00:42:08.159013   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Skipping /home - not owner
	I0314 00:42:08.160139   49637 main.go:141] libmachine: (NoKubernetes-576005) define libvirt domain using xml: 
	I0314 00:42:08.160147   49637 main.go:141] libmachine: (NoKubernetes-576005) <domain type='kvm'>
	I0314 00:42:08.160152   49637 main.go:141] libmachine: (NoKubernetes-576005)   <name>NoKubernetes-576005</name>
	I0314 00:42:08.160156   49637 main.go:141] libmachine: (NoKubernetes-576005)   <memory unit='MiB'>6000</memory>
	I0314 00:42:08.160161   49637 main.go:141] libmachine: (NoKubernetes-576005)   <vcpu>2</vcpu>
	I0314 00:42:08.160164   49637 main.go:141] libmachine: (NoKubernetes-576005)   <features>
	I0314 00:42:08.160170   49637 main.go:141] libmachine: (NoKubernetes-576005)     <acpi/>
	I0314 00:42:08.160173   49637 main.go:141] libmachine: (NoKubernetes-576005)     <apic/>
	I0314 00:42:08.160177   49637 main.go:141] libmachine: (NoKubernetes-576005)     <pae/>
	I0314 00:42:08.160180   49637 main.go:141] libmachine: (NoKubernetes-576005)     
	I0314 00:42:08.160194   49637 main.go:141] libmachine: (NoKubernetes-576005)   </features>
	I0314 00:42:08.160199   49637 main.go:141] libmachine: (NoKubernetes-576005)   <cpu mode='host-passthrough'>
	I0314 00:42:08.160204   49637 main.go:141] libmachine: (NoKubernetes-576005)   
	I0314 00:42:08.160209   49637 main.go:141] libmachine: (NoKubernetes-576005)   </cpu>
	I0314 00:42:08.160215   49637 main.go:141] libmachine: (NoKubernetes-576005)   <os>
	I0314 00:42:08.160220   49637 main.go:141] libmachine: (NoKubernetes-576005)     <type>hvm</type>
	I0314 00:42:08.160227   49637 main.go:141] libmachine: (NoKubernetes-576005)     <boot dev='cdrom'/>
	I0314 00:42:08.160240   49637 main.go:141] libmachine: (NoKubernetes-576005)     <boot dev='hd'/>
	I0314 00:42:08.160246   49637 main.go:141] libmachine: (NoKubernetes-576005)     <bootmenu enable='no'/>
	I0314 00:42:08.160249   49637 main.go:141] libmachine: (NoKubernetes-576005)   </os>
	I0314 00:42:08.160253   49637 main.go:141] libmachine: (NoKubernetes-576005)   <devices>
	I0314 00:42:08.160257   49637 main.go:141] libmachine: (NoKubernetes-576005)     <disk type='file' device='cdrom'>
	I0314 00:42:08.160267   49637 main.go:141] libmachine: (NoKubernetes-576005)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005/boot2docker.iso'/>
	I0314 00:42:08.160276   49637 main.go:141] libmachine: (NoKubernetes-576005)       <target dev='hdc' bus='scsi'/>
	I0314 00:42:08.160280   49637 main.go:141] libmachine: (NoKubernetes-576005)       <readonly/>
	I0314 00:42:08.160284   49637 main.go:141] libmachine: (NoKubernetes-576005)     </disk>
	I0314 00:42:08.160289   49637 main.go:141] libmachine: (NoKubernetes-576005)     <disk type='file' device='disk'>
	I0314 00:42:08.160294   49637 main.go:141] libmachine: (NoKubernetes-576005)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 00:42:08.160301   49637 main.go:141] libmachine: (NoKubernetes-576005)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005/NoKubernetes-576005.rawdisk'/>
	I0314 00:42:08.160305   49637 main.go:141] libmachine: (NoKubernetes-576005)       <target dev='hda' bus='virtio'/>
	I0314 00:42:08.160309   49637 main.go:141] libmachine: (NoKubernetes-576005)     </disk>
	I0314 00:42:08.160312   49637 main.go:141] libmachine: (NoKubernetes-576005)     <interface type='network'>
	I0314 00:42:08.160320   49637 main.go:141] libmachine: (NoKubernetes-576005)       <source network='mk-NoKubernetes-576005'/>
	I0314 00:42:08.160328   49637 main.go:141] libmachine: (NoKubernetes-576005)       <model type='virtio'/>
	I0314 00:42:08.160333   49637 main.go:141] libmachine: (NoKubernetes-576005)     </interface>
	I0314 00:42:08.160336   49637 main.go:141] libmachine: (NoKubernetes-576005)     <interface type='network'>
	I0314 00:42:08.160341   49637 main.go:141] libmachine: (NoKubernetes-576005)       <source network='default'/>
	I0314 00:42:08.160345   49637 main.go:141] libmachine: (NoKubernetes-576005)       <model type='virtio'/>
	I0314 00:42:08.160349   49637 main.go:141] libmachine: (NoKubernetes-576005)     </interface>
	I0314 00:42:08.160352   49637 main.go:141] libmachine: (NoKubernetes-576005)     <serial type='pty'>
	I0314 00:42:08.160377   49637 main.go:141] libmachine: (NoKubernetes-576005)       <target port='0'/>
	I0314 00:42:08.160389   49637 main.go:141] libmachine: (NoKubernetes-576005)     </serial>
	I0314 00:42:08.160394   49637 main.go:141] libmachine: (NoKubernetes-576005)     <console type='pty'>
	I0314 00:42:08.160401   49637 main.go:141] libmachine: (NoKubernetes-576005)       <target type='serial' port='0'/>
	I0314 00:42:08.160406   49637 main.go:141] libmachine: (NoKubernetes-576005)     </console>
	I0314 00:42:08.160410   49637 main.go:141] libmachine: (NoKubernetes-576005)     <rng model='virtio'>
	I0314 00:42:08.160416   49637 main.go:141] libmachine: (NoKubernetes-576005)       <backend model='random'>/dev/random</backend>
	I0314 00:42:08.160420   49637 main.go:141] libmachine: (NoKubernetes-576005)     </rng>
	I0314 00:42:08.160424   49637 main.go:141] libmachine: (NoKubernetes-576005)     
	I0314 00:42:08.160427   49637 main.go:141] libmachine: (NoKubernetes-576005)     
	I0314 00:42:08.160441   49637 main.go:141] libmachine: (NoKubernetes-576005)   </devices>
	I0314 00:42:08.160444   49637 main.go:141] libmachine: (NoKubernetes-576005) </domain>
	I0314 00:42:08.160452   49637 main.go:141] libmachine: (NoKubernetes-576005) 
	I0314 00:42:08.165102   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:20:27:da in network default
	I0314 00:42:08.165705   49637 main.go:141] libmachine: (NoKubernetes-576005) Ensuring networks are active...
	I0314 00:42:08.165717   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:08.166426   49637 main.go:141] libmachine: (NoKubernetes-576005) Ensuring network default is active
	I0314 00:42:08.166694   49637 main.go:141] libmachine: (NoKubernetes-576005) Ensuring network mk-NoKubernetes-576005 is active
	I0314 00:42:08.167169   49637 main.go:141] libmachine: (NoKubernetes-576005) Getting domain xml...
	I0314 00:42:08.167895   49637 main.go:141] libmachine: (NoKubernetes-576005) Creating domain...
	I0314 00:42:09.388674   49637 main.go:141] libmachine: (NoKubernetes-576005) Waiting to get IP...
	I0314 00:42:09.389479   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:09.389998   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | unable to find current IP address of domain NoKubernetes-576005 in network mk-NoKubernetes-576005
	I0314 00:42:09.390037   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:09.389975   49660 retry.go:31] will retry after 221.324781ms: waiting for machine to come up
	I0314 00:42:09.613399   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:09.613850   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | unable to find current IP address of domain NoKubernetes-576005 in network mk-NoKubernetes-576005
	I0314 00:42:09.613865   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:09.613808   49660 retry.go:31] will retry after 261.31818ms: waiting for machine to come up
	I0314 00:42:09.876229   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:09.876750   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | unable to find current IP address of domain NoKubernetes-576005 in network mk-NoKubernetes-576005
	I0314 00:42:09.876797   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:09.876735   49660 retry.go:31] will retry after 333.496586ms: waiting for machine to come up
	I0314 00:42:10.212257   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:10.212700   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | unable to find current IP address of domain NoKubernetes-576005 in network mk-NoKubernetes-576005
	I0314 00:42:10.212727   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:10.212689   49660 retry.go:31] will retry after 530.508296ms: waiting for machine to come up
	I0314 00:42:10.744916   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:10.745460   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | unable to find current IP address of domain NoKubernetes-576005 in network mk-NoKubernetes-576005
	I0314 00:42:10.745482   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:10.745388   49660 retry.go:31] will retry after 560.790902ms: waiting for machine to come up
	I0314 00:42:11.308128   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:11.308619   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | unable to find current IP address of domain NoKubernetes-576005 in network mk-NoKubernetes-576005
	I0314 00:42:11.308649   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:11.308560   49660 retry.go:31] will retry after 791.425652ms: waiting for machine to come up
	I0314 00:42:12.101911   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:12.102442   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | unable to find current IP address of domain NoKubernetes-576005 in network mk-NoKubernetes-576005
	I0314 00:42:12.102462   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:12.102375   49660 retry.go:31] will retry after 1.13830533s: waiting for machine to come up
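The 49637 process above defines a private libvirt network (mk-NoKubernetes-576005, 192.168.72.0/24) and a KVM domain from generated XML, then polls DHCP until the guest reports an IP. A minimal shell sketch of the same steps done by hand with virsh follows; the .xml file names are hypothetical placeholders for the <network> and <domain> XML printed above, not paths taken from this log.

    # define and start the private network from the generated <network> XML
    virsh net-define mk-NoKubernetes-576005.xml
    virsh net-start mk-NoKubernetes-576005

    # define and boot the guest from the generated <domain> XML
    virsh define NoKubernetes-576005.xml
    virsh start NoKubernetes-576005

    # poll DHCP leases until the MAC seen in the log obtains an address
    virsh net-dhcp-leases mk-NoKubernetes-576005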
	I0314 00:42:10.823224   49120 addons.go:505] duration metric: took 5.657841ms for enable addons: enabled=[]
	I0314 00:42:11.025763   49120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:42:11.041692   49120 node_ready.go:35] waiting up to 6m0s for node "pause-501107" to be "Ready" ...
	I0314 00:42:11.044573   49120 node_ready.go:49] node "pause-501107" has status "Ready":"True"
	I0314 00:42:11.044601   49120 node_ready.go:38] duration metric: took 2.879204ms for node "pause-501107" to be "Ready" ...
	I0314 00:42:11.044608   49120 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:42:11.057098   49120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wpvxx" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:11.454422   49120 pod_ready.go:92] pod "coredns-5dd5756b68-wpvxx" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:11.454454   49120 pod_ready.go:81] duration metric: took 397.332452ms for pod "coredns-5dd5756b68-wpvxx" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:11.454467   49120 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:11.854039   49120 pod_ready.go:92] pod "etcd-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:11.854062   49120 pod_ready.go:81] duration metric: took 399.587177ms for pod "etcd-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:11.854079   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:12.255078   49120 pod_ready.go:92] pod "kube-apiserver-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:12.255103   49120 pod_ready.go:81] duration metric: took 401.01813ms for pod "kube-apiserver-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:12.255113   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:12.653061   49120 pod_ready.go:92] pod "kube-controller-manager-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:12.653096   49120 pod_ready.go:81] duration metric: took 397.975373ms for pod "kube-controller-manager-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:12.653108   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rb9kh" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:13.053400   49120 pod_ready.go:92] pod "kube-proxy-rb9kh" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:13.053427   49120 pod_ready.go:81] duration metric: took 400.311039ms for pod "kube-proxy-rb9kh" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:13.053439   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:13.455577   49120 pod_ready.go:92] pod "kube-scheduler-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:13.455603   49120 pod_ready.go:81] duration metric: took 402.156969ms for pod "kube-scheduler-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:13.455612   49120 pod_ready.go:38] duration metric: took 2.410991413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:42:13.455672   49120 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:42:13.455729   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:42:13.470208   49120 api_server.go:72] duration metric: took 2.652629103s to wait for apiserver process to appear ...
	I0314 00:42:13.470239   49120 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:42:13.470260   49120 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0314 00:42:13.477249   49120 api_server.go:279] https://192.168.39.149:8443/healthz returned 200:
	ok
	I0314 00:42:13.480606   49120 api_server.go:141] control plane version: v1.28.4
	I0314 00:42:13.480629   49120 api_server.go:131] duration metric: took 10.381847ms to wait for apiserver health ...
	I0314 00:42:13.480640   49120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:42:13.657944   49120 system_pods.go:59] 6 kube-system pods found
	I0314 00:42:13.657976   49120 system_pods.go:61] "coredns-5dd5756b68-wpvxx" [fbf69bb2-2b46-4c05-8ddc-85b3853135bc] Running
	I0314 00:42:13.657982   49120 system_pods.go:61] "etcd-pause-501107" [4c4fa65f-568a-4b53-94d3-6b8182e159c4] Running
	I0314 00:42:13.657988   49120 system_pods.go:61] "kube-apiserver-pause-501107" [d2f6f361-722e-41c4-9fa3-8a36f06e7a71] Running
	I0314 00:42:13.657994   49120 system_pods.go:61] "kube-controller-manager-pause-501107" [2bd77fd2-8422-4a18-8e66-f8d065209bbb] Running
	I0314 00:42:13.657997   49120 system_pods.go:61] "kube-proxy-rb9kh" [590a1416-591e-4be6-a96a-907165b4bb81] Running
	I0314 00:42:13.658002   49120 system_pods.go:61] "kube-scheduler-pause-501107" [77e15395-49cf-4302-84c1-4f8f0d21cf9f] Running
	I0314 00:42:13.658010   49120 system_pods.go:74] duration metric: took 177.362445ms to wait for pod list to return data ...
	I0314 00:42:13.658026   49120 default_sa.go:34] waiting for default service account to be created ...
	I0314 00:42:13.855214   49120 default_sa.go:45] found service account: "default"
	I0314 00:42:13.855241   49120 default_sa.go:55] duration metric: took 197.205194ms for default service account to be created ...
	I0314 00:42:13.855253   49120 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 00:42:14.057563   49120 system_pods.go:86] 6 kube-system pods found
	I0314 00:42:14.057592   49120 system_pods.go:89] "coredns-5dd5756b68-wpvxx" [fbf69bb2-2b46-4c05-8ddc-85b3853135bc] Running
	I0314 00:42:14.057600   49120 system_pods.go:89] "etcd-pause-501107" [4c4fa65f-568a-4b53-94d3-6b8182e159c4] Running
	I0314 00:42:14.057607   49120 system_pods.go:89] "kube-apiserver-pause-501107" [d2f6f361-722e-41c4-9fa3-8a36f06e7a71] Running
	I0314 00:42:14.057613   49120 system_pods.go:89] "kube-controller-manager-pause-501107" [2bd77fd2-8422-4a18-8e66-f8d065209bbb] Running
	I0314 00:42:14.057629   49120 system_pods.go:89] "kube-proxy-rb9kh" [590a1416-591e-4be6-a96a-907165b4bb81] Running
	I0314 00:42:14.057635   49120 system_pods.go:89] "kube-scheduler-pause-501107" [77e15395-49cf-4302-84c1-4f8f0d21cf9f] Running
	I0314 00:42:14.057643   49120 system_pods.go:126] duration metric: took 202.383041ms to wait for k8s-apps to be running ...
	I0314 00:42:14.057652   49120 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 00:42:14.057705   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:42:14.078695   49120 system_svc.go:56] duration metric: took 21.034303ms WaitForService to wait for kubelet
	I0314 00:42:14.078723   49120 kubeadm.go:576] duration metric: took 3.261145926s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 00:42:14.078779   49120 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:42:14.254039   49120 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:42:14.254064   49120 node_conditions.go:123] node cpu capacity is 2
	I0314 00:42:14.254075   49120 node_conditions.go:105] duration metric: took 175.289399ms to run NodePressure ...
	I0314 00:42:14.254087   49120 start.go:240] waiting for startup goroutines ...
	I0314 00:42:14.254095   49120 start.go:245] waiting for cluster config update ...
	I0314 00:42:14.254112   49120 start.go:254] writing updated cluster config ...
	I0314 00:42:14.254382   49120 ssh_runner.go:195] Run: rm -f paused
	I0314 00:42:14.302155   49120 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 00:42:14.304313   49120 out.go:177] * Done! kubectl is now configured to use "pause-501107" cluster and "default" namespace by default
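Before declaring the pause-501107 restart done, the 49120 run verified node readiness, the six kube-system pods, the apiserver /healthz endpoint, the default service account, and the kubelet unit. A minimal sketch of the equivalent manual checks, assuming the kubeconfig context and profile name "pause-501107" taken from the log:

    kubectl --context pause-501107 get nodes
    kubectl --context pause-501107 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
    kubectl --context pause-501107 get --raw=/healthz        # expected to print "ok", as above
    kubectl --context pause-501107 -n kube-system get pods   # the six control-plane and system pods listed above
    minikube -p pause-501107 ssh -- sudo systemctl is-active kubelet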
	I0314 00:42:09.978145   48503 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:42:09.978403   48503 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
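The interleaved 48503 kubeadm output reports that the kubelet health endpoint on port 10248 is refusing connections during control-plane bring-up. A minimal sketch of the usual on-node checks for this condition (standard kubelet troubleshooting commands, not taken from this log):

    # on the affected node
    curl -sSL http://localhost:10248/healthz      # the same probe kubeadm's kubelet-check performs
    sudo systemctl status kubelet
    sudo journalctl -u kubelet --no-pager -n 100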
	
	
	==> CRI-O <==
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.075746123Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3b3eb29-4665-4465-bd16-e02858ecc914 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.077235277Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf03b38b-de74-4ab5-b296-e48aff914b97 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.080479804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710376935080451045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf03b38b-de74-4ab5-b296-e48aff914b97 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.084988750Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d65cfbc9-2f72-4f60-924b-67ba2b063374 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.085270388Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d65cfbc9-2f72-4f60-924b-67ba2b063374 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.085567899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f362b6b9c388537fb9265a8d5254fd52299221402541224efae8d619df85674e,PodSandboxId:39685797b16c79da609747c30f0224e7660204849c5078bc90a4656f0bfb1a5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710376919195751296,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e00189b9412925e7f376b59a9abcd03a04dcced681f35361afdeecb146bad1,PodSandboxId:7b3954bc9895153cb3000c648dfd9dbcc65ddc315332fbbd2243901fce413ec9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710376919191674163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9618edc80e82421dc90e3c7c666830d9315f44196fa10dcabe3a255c65c1d680,PodSandboxId:0e4fa687e4a3c32fd476bfa45b5972f222a3c42988183fa9f324e32bdb3a0ee4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710376914596456774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e90dbb56
f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12760e213c97f2ab352485691401e271890fa2533db56f9d47204d6e42957286,PodSandboxId:ab2de5f80237848af6fbfa3a0247be5985dd7e0b0b0f764002090602e96a8d2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710376914554818214,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
0833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff8ec8d3d3cbd27d9be78021b59724268d26ae108ca3c6e67fb06d3f9e821e7,PodSandboxId:246ac438d9e6a15499557e851bda6274db2faf33f252325263c136ad2245012a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710376914597636600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb822b291100c874f
48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cba6feac71d4bb6d2cda5977defb2c01f884e12b124073de96857ecac9eacf9,PodSandboxId:fffb7517675f5de261500c7902d65af5a2214ab3f432631244e62ec966853d53,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710376914507724701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io
.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564c6d1c26294d9e13c92ef918498427fc6d10ee56d848841c9ad11172aa1ebc,PodSandboxId:a5c729b746c570e4f5cdb36bc03e4f2e377e874d63ae24117b503c79a60e20b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710376909653947338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e
021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adb0ce5b1da890d2eefe281c08c7ad6148d1b18ffc856406dc43b892368e2a7,PodSandboxId:9b9a0114991f5a81699c7687a902ec654e8b4912aad278e3dc25ec2efa24f6b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710376908993566059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db445d273788ce981eb36c95b1fa652d13ea17ae0b2c66f57850e2b21523526,PodSandboxId:a505210a89e6e817da678633ebb48adae43d48cb34815d8913d191576d15eaf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710376908930921217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02725e3ead70526b03287924b07f3153e75d8a2e487c2f74233b3fed554a3f61,PodSandboxId:1ef1204dc0777f3991ef37f4302035cb4b26ea673bd3153e5e7724971df600af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710376908532775426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 97e90dbb56f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9dcbb05205596b1feb1dd58a5099aa834f378b05a5767e04eafbc4a72cae716,PodSandboxId:e30ba0537bb2a7643f08dac541d78346935dc46a06d209cc3d247a7772f3639a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710376908743403425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 90833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a9980abc19fcb8c840dbb9799d3de518ef33585cda7f448409998c7a85dbd5,PodSandboxId:40e450041062143a09ac261eb7b95c2341d27e66e79352b1b4f47d79d927a9f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710376908396991929,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: eecb822b291100c874f48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d65cfbc9-2f72-4f60-924b-67ba2b063374 name=/runtime.v1.RuntimeService/ListContainers
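The Version, ImageFsInfo, ListContainers, and ListPodSandbox request/response pairs in this CRI-O debug log are the CRI RPCs issued against the runtime while node status is collected. A minimal sketch of the equivalent queries run by hand with crictl on the node (standard crictl subcommands, assumed available in the minikube guest):

    sudo crictl version        # RuntimeName/RuntimeVersion, as in the Version RPC
    sudo crictl ps -a          # all containers, matching the ListContainers response
    sudo crictl pods           # pod sandboxes, matching ListPodSandbox
    sudo crictl imagefsinfo    # image filesystem usage, matching ImageFsInfo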
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.133886027Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d1dca15-6fe1-4762-bfd4-c3e3c57983f7 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.134039327Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d1dca15-6fe1-4762-bfd4-c3e3c57983f7 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.135699629Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9dd30c28-7f87-4ca9-b8bf-c48808e2a156 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.136063557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710376935136041636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dd30c28-7f87-4ca9-b8bf-c48808e2a156 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.137429328Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=797c0d95-65ae-4213-b749-cc3335991451 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.137528323Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=797c0d95-65ae-4213-b749-cc3335991451 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.137869288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f362b6b9c388537fb9265a8d5254fd52299221402541224efae8d619df85674e,PodSandboxId:39685797b16c79da609747c30f0224e7660204849c5078bc90a4656f0bfb1a5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710376919195751296,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e00189b9412925e7f376b59a9abcd03a04dcced681f35361afdeecb146bad1,PodSandboxId:7b3954bc9895153cb3000c648dfd9dbcc65ddc315332fbbd2243901fce413ec9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710376919191674163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9618edc80e82421dc90e3c7c666830d9315f44196fa10dcabe3a255c65c1d680,PodSandboxId:0e4fa687e4a3c32fd476bfa45b5972f222a3c42988183fa9f324e32bdb3a0ee4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710376914596456774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e90dbb56
f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12760e213c97f2ab352485691401e271890fa2533db56f9d47204d6e42957286,PodSandboxId:ab2de5f80237848af6fbfa3a0247be5985dd7e0b0b0f764002090602e96a8d2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710376914554818214,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
0833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff8ec8d3d3cbd27d9be78021b59724268d26ae108ca3c6e67fb06d3f9e821e7,PodSandboxId:246ac438d9e6a15499557e851bda6274db2faf33f252325263c136ad2245012a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710376914597636600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb822b291100c874f
48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cba6feac71d4bb6d2cda5977defb2c01f884e12b124073de96857ecac9eacf9,PodSandboxId:fffb7517675f5de261500c7902d65af5a2214ab3f432631244e62ec966853d53,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710376914507724701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io
.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564c6d1c26294d9e13c92ef918498427fc6d10ee56d848841c9ad11172aa1ebc,PodSandboxId:a5c729b746c570e4f5cdb36bc03e4f2e377e874d63ae24117b503c79a60e20b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710376909653947338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e
021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adb0ce5b1da890d2eefe281c08c7ad6148d1b18ffc856406dc43b892368e2a7,PodSandboxId:9b9a0114991f5a81699c7687a902ec654e8b4912aad278e3dc25ec2efa24f6b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710376908993566059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db445d273788ce981eb36c95b1fa652d13ea17ae0b2c66f57850e2b21523526,PodSandboxId:a505210a89e6e817da678633ebb48adae43d48cb34815d8913d191576d15eaf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710376908930921217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02725e3ead70526b03287924b07f3153e75d8a2e487c2f74233b3fed554a3f61,PodSandboxId:1ef1204dc0777f3991ef37f4302035cb4b26ea673bd3153e5e7724971df600af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710376908532775426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 97e90dbb56f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9dcbb05205596b1feb1dd58a5099aa834f378b05a5767e04eafbc4a72cae716,PodSandboxId:e30ba0537bb2a7643f08dac541d78346935dc46a06d209cc3d247a7772f3639a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710376908743403425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 90833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a9980abc19fcb8c840dbb9799d3de518ef33585cda7f448409998c7a85dbd5,PodSandboxId:40e450041062143a09ac261eb7b95c2341d27e66e79352b1b4f47d79d927a9f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710376908396991929,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: eecb822b291100c874f48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=797c0d95-65ae-4213-b749-cc3335991451 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.138848018Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=78b29838-ee68-4bec-86ff-d492a1184751 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.139259609Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7b3954bc9895153cb3000c648dfd9dbcc65ddc315332fbbd2243901fce413ec9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-wpvxx,Uid:fbf69bb2-2b46-4c05-8ddc-85b3853135bc,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710376910833701275,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T00:40:32.301873448Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab2de5f80237848af6fbfa3a0247be5985dd7e0b0b0f764002090602e96a8d2c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-501107,Uid:90833fcc9599b44aa6218e4f9b67bc85,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710376910789822332,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90833fcc9599b44aa6218e4f9b67bc85,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 90833fcc9599b44aa6218e4f9b67bc85,kubernetes.io/config.seen: 2024-03-14T00:40:18.608651308Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0e4fa687e4a3c32fd476bfa45b5972f222a3c42988183fa9f324e32bdb3a0ee4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-501107,Uid:97e90dbb56f55f08b36551e3e1ee98f1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710376910772636763,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e90dbb56f55
f08b36551e3e1ee98f1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 97e90dbb56f55f08b36551e3e1ee98f1,kubernetes.io/config.seen: 2024-03-14T00:40:18.608652076Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:246ac438d9e6a15499557e851bda6274db2faf33f252325263c136ad2245012a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-501107,Uid:eecb822b291100c874f48db4490230ec,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710376910751842791,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb822b291100c874f48db4490230ec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.149:8443,kubernetes.io/config.hash: eecb822b291100c874f48db4490230ec,kubernetes.io/config.seen: 2024-03-14T00:40:18.608650233Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:39685797b16c79da609747c30f0224e7660204849c5078bc90a4656f0bfb1a5b,Metadata:&PodSandboxMetadata{Name:kube-proxy-rb9kh,Uid:590a1416-591e-4be6-a96a-907165b4bb81,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710376910639748824,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T00:40:31.885183245Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fffb7517675f5de261500c7902d65af5a2214ab3f432631244e62ec966853d53,Metadata:&PodSandboxMetadata{Name:etcd-pause-501107,Uid:a9bb41914dbba8a07ea51e5de653db74,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710376910616691572,Labels:map[string]string{component: etcd,io.kubernetes.contai
ner.name: POD,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.149:2379,kubernetes.io/config.hash: a9bb41914dbba8a07ea51e5de653db74,kubernetes.io/config.seen: 2024-03-14T00:40:18.608646607Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a5c729b746c570e4f5cdb36bc03e4f2e377e874d63ae24117b503c79a60e20b0,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-wpvxx,Uid:fbf69bb2-2b46-4c05-8ddc-85b3853135bc,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710376908311078279,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-03-14T00:40:32.301873448Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9b9a0114991f5a81699c7687a902ec654e8b4912aad278e3dc25ec2efa24f6b7,Metadata:&PodSandboxMetadata{Name:kube-proxy-rb9kh,Uid:590a1416-591e-4be6-a96a-907165b4bb81,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710376908284170179,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T00:40:31.885183245Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a505210a89e6e817da678633ebb48adae43d48cb34815d8913d191576d15eaf3,Metadata:&PodSandboxMetadata{Name:etcd-pause-501107,Uid:a9bb41914dbba8a07ea51e5de653db74,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:171
0376908276332122,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.149:2379,kubernetes.io/config.hash: a9bb41914dbba8a07ea51e5de653db74,kubernetes.io/config.seen: 2024-03-14T00:40:18.608646607Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e30ba0537bb2a7643f08dac541d78346935dc46a06d209cc3d247a7772f3639a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-501107,Uid:90833fcc9599b44aa6218e4f9b67bc85,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710376908271850161,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 90833fcc9599b44aa6218e4f9b67bc85,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 90833fcc9599b44aa6218e4f9b67bc85,kubernetes.io/config.seen: 2024-03-14T00:40:18.608651308Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1ef1204dc0777f3991ef37f4302035cb4b26ea673bd3153e5e7724971df600af,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-501107,Uid:97e90dbb56f55f08b36551e3e1ee98f1,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710376908192411367,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e90dbb56f55f08b36551e3e1ee98f1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 97e90dbb56f55f08b36551e3e1ee98f1,kubernetes.io/config.seen: 2024-03-14T00:40:18.608652076Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:40e450041062143a09ac261eb
7b95c2341d27e66e79352b1b4f47d79d927a9f6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-501107,Uid:eecb822b291100c874f48db4490230ec,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710376908126549022,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb822b291100c874f48db4490230ec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.149:8443,kubernetes.io/config.hash: eecb822b291100c874f48db4490230ec,kubernetes.io/config.seen: 2024-03-14T00:40:18.608650233Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=78b29838-ee68-4bec-86ff-d492a1184751 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.139980508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66841d58-ff14-475f-adba-32b978b44aa2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.140030495Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66841d58-ff14-475f-adba-32b978b44aa2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.140400766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f362b6b9c388537fb9265a8d5254fd52299221402541224efae8d619df85674e,PodSandboxId:39685797b16c79da609747c30f0224e7660204849c5078bc90a4656f0bfb1a5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710376919195751296,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e00189b9412925e7f376b59a9abcd03a04dcced681f35361afdeecb146bad1,PodSandboxId:7b3954bc9895153cb3000c648dfd9dbcc65ddc315332fbbd2243901fce413ec9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710376919191674163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9618edc80e82421dc90e3c7c666830d9315f44196fa10dcabe3a255c65c1d680,PodSandboxId:0e4fa687e4a3c32fd476bfa45b5972f222a3c42988183fa9f324e32bdb3a0ee4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710376914596456774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e90dbb56
f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12760e213c97f2ab352485691401e271890fa2533db56f9d47204d6e42957286,PodSandboxId:ab2de5f80237848af6fbfa3a0247be5985dd7e0b0b0f764002090602e96a8d2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710376914554818214,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
0833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff8ec8d3d3cbd27d9be78021b59724268d26ae108ca3c6e67fb06d3f9e821e7,PodSandboxId:246ac438d9e6a15499557e851bda6274db2faf33f252325263c136ad2245012a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710376914597636600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb822b291100c874f
48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cba6feac71d4bb6d2cda5977defb2c01f884e12b124073de96857ecac9eacf9,PodSandboxId:fffb7517675f5de261500c7902d65af5a2214ab3f432631244e62ec966853d53,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710376914507724701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io
.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564c6d1c26294d9e13c92ef918498427fc6d10ee56d848841c9ad11172aa1ebc,PodSandboxId:a5c729b746c570e4f5cdb36bc03e4f2e377e874d63ae24117b503c79a60e20b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710376909653947338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e
021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adb0ce5b1da890d2eefe281c08c7ad6148d1b18ffc856406dc43b892368e2a7,PodSandboxId:9b9a0114991f5a81699c7687a902ec654e8b4912aad278e3dc25ec2efa24f6b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710376908993566059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db445d273788ce981eb36c95b1fa652d13ea17ae0b2c66f57850e2b21523526,PodSandboxId:a505210a89e6e817da678633ebb48adae43d48cb34815d8913d191576d15eaf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710376908930921217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02725e3ead70526b03287924b07f3153e75d8a2e487c2f74233b3fed554a3f61,PodSandboxId:1ef1204dc0777f3991ef37f4302035cb4b26ea673bd3153e5e7724971df600af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710376908532775426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 97e90dbb56f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9dcbb05205596b1feb1dd58a5099aa834f378b05a5767e04eafbc4a72cae716,PodSandboxId:e30ba0537bb2a7643f08dac541d78346935dc46a06d209cc3d247a7772f3639a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710376908743403425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 90833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a9980abc19fcb8c840dbb9799d3de518ef33585cda7f448409998c7a85dbd5,PodSandboxId:40e450041062143a09ac261eb7b95c2341d27e66e79352b1b4f47d79d927a9f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710376908396991929,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: eecb822b291100c874f48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66841d58-ff14-475f-adba-32b978b44aa2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.190546401Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=af6b1e7e-644a-4013-8492-12efd5fc6964 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.190662261Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af6b1e7e-644a-4013-8492-12efd5fc6964 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.192110402Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1f2ef4a-e757-4b7f-b826-f2b1297f5793 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.192790565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710376935192765278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1f2ef4a-e757-4b7f-b826-f2b1297f5793 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.193858404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3165b9aa-f714-43fe-88f4-18d06454e2ba name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.193933745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3165b9aa-f714-43fe-88f4-18d06454e2ba name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:15 pause-501107 crio[2766]: time="2024-03-14 00:42:15.194495627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f362b6b9c388537fb9265a8d5254fd52299221402541224efae8d619df85674e,PodSandboxId:39685797b16c79da609747c30f0224e7660204849c5078bc90a4656f0bfb1a5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710376919195751296,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e00189b9412925e7f376b59a9abcd03a04dcced681f35361afdeecb146bad1,PodSandboxId:7b3954bc9895153cb3000c648dfd9dbcc65ddc315332fbbd2243901fce413ec9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710376919191674163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9618edc80e82421dc90e3c7c666830d9315f44196fa10dcabe3a255c65c1d680,PodSandboxId:0e4fa687e4a3c32fd476bfa45b5972f222a3c42988183fa9f324e32bdb3a0ee4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710376914596456774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e90dbb56
f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12760e213c97f2ab352485691401e271890fa2533db56f9d47204d6e42957286,PodSandboxId:ab2de5f80237848af6fbfa3a0247be5985dd7e0b0b0f764002090602e96a8d2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710376914554818214,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
0833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff8ec8d3d3cbd27d9be78021b59724268d26ae108ca3c6e67fb06d3f9e821e7,PodSandboxId:246ac438d9e6a15499557e851bda6274db2faf33f252325263c136ad2245012a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710376914597636600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb822b291100c874f
48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cba6feac71d4bb6d2cda5977defb2c01f884e12b124073de96857ecac9eacf9,PodSandboxId:fffb7517675f5de261500c7902d65af5a2214ab3f432631244e62ec966853d53,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710376914507724701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io
.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564c6d1c26294d9e13c92ef918498427fc6d10ee56d848841c9ad11172aa1ebc,PodSandboxId:a5c729b746c570e4f5cdb36bc03e4f2e377e874d63ae24117b503c79a60e20b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710376909653947338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e
021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adb0ce5b1da890d2eefe281c08c7ad6148d1b18ffc856406dc43b892368e2a7,PodSandboxId:9b9a0114991f5a81699c7687a902ec654e8b4912aad278e3dc25ec2efa24f6b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710376908993566059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db445d273788ce981eb36c95b1fa652d13ea17ae0b2c66f57850e2b21523526,PodSandboxId:a505210a89e6e817da678633ebb48adae43d48cb34815d8913d191576d15eaf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710376908930921217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02725e3ead70526b03287924b07f3153e75d8a2e487c2f74233b3fed554a3f61,PodSandboxId:1ef1204dc0777f3991ef37f4302035cb4b26ea673bd3153e5e7724971df600af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710376908532775426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 97e90dbb56f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9dcbb05205596b1feb1dd58a5099aa834f378b05a5767e04eafbc4a72cae716,PodSandboxId:e30ba0537bb2a7643f08dac541d78346935dc46a06d209cc3d247a7772f3639a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710376908743403425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 90833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a9980abc19fcb8c840dbb9799d3de518ef33585cda7f448409998c7a85dbd5,PodSandboxId:40e450041062143a09ac261eb7b95c2341d27e66e79352b1b4f47d79d927a9f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710376908396991929,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: eecb822b291100c874f48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3165b9aa-f714-43fe-88f4-18d06454e2ba name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f362b6b9c3885       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   16 seconds ago      Running             kube-proxy                2                   39685797b16c7       kube-proxy-rb9kh
	f4e00189b9412       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 seconds ago      Running             coredns                   2                   7b3954bc98951       coredns-5dd5756b68-wpvxx
	4ff8ec8d3d3cb       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   20 seconds ago      Running             kube-apiserver            2                   246ac438d9e6a       kube-apiserver-pause-501107
	9618edc80e824       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   20 seconds ago      Running             kube-scheduler            2                   0e4fa687e4a3c       kube-scheduler-pause-501107
	12760e213c97f       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   20 seconds ago      Running             kube-controller-manager   2                   ab2de5f802378       kube-controller-manager-pause-501107
	8cba6feac71d4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   20 seconds ago      Running             etcd                      2                   fffb7517675f5       etcd-pause-501107
	564c6d1c26294       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   25 seconds ago      Exited              coredns                   1                   a5c729b746c57       coredns-5dd5756b68-wpvxx
	4adb0ce5b1da8       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   26 seconds ago      Exited              kube-proxy                1                   9b9a0114991f5       kube-proxy-rb9kh
	2db445d273788       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   26 seconds ago      Exited              etcd                      1                   a505210a89e6e       etcd-pause-501107
	c9dcbb0520559       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   26 seconds ago      Exited              kube-controller-manager   1                   e30ba0537bb2a       kube-controller-manager-pause-501107
	02725e3ead705       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   26 seconds ago      Exited              kube-scheduler            1                   1ef1204dc0777       kube-scheduler-pause-501107
	e5a9980abc19f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   26 seconds ago      Exited              kube-apiserver            1                   40e4500410621       kube-apiserver-pause-501107
	
	
	==> coredns [564c6d1c26294d9e13c92ef918498427fc6d10ee56d848841c9ad11172aa1ebc] <==
	
	
	==> coredns [f4e00189b9412925e7f376b59a9abcd03a04dcced681f35361afdeecb146bad1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35216 - 29426 "HINFO IN 5898812746197343338.95170553043417064. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.011543277s
	
	
	==> describe nodes <==
	Name:               pause-501107
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-501107
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=pause-501107
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T00_40_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:40:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-501107
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:42:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 00:41:58 +0000   Thu, 14 Mar 2024 00:40:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 00:41:58 +0000   Thu, 14 Mar 2024 00:40:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 00:41:58 +0000   Thu, 14 Mar 2024 00:40:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 00:41:58 +0000   Thu, 14 Mar 2024 00:40:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    pause-501107
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	System Info:
	  Machine ID:                 18435ec4adf14ebe95268b7da54269e5
	  System UUID:                18435ec4-adf1-4ebe-9526-8b7da54269e5
	  Boot ID:                    8818f262-fa38-48c0-8dc2-cf35cc50bad3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-wpvxx                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     103s
	  kube-system                 etcd-pause-501107                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         117s
	  kube-system                 kube-apiserver-pause-501107             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-pause-501107    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-rb9kh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-pause-501107             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  Starting                 16s                  kube-proxy       
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node pause-501107 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node pause-501107 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)  kubelet          Node pause-501107 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node pause-501107 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  117s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node pause-501107 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node pause-501107 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeReady                116s                 kubelet          Node pause-501107 status is now: NodeReady
	  Normal  RegisteredNode           104s                 node-controller  Node pause-501107 event: Registered Node pause-501107 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 22s)    kubelet          Node pause-501107 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 22s)    kubelet          Node pause-501107 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 22s)    kubelet          Node pause-501107 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                   node-controller  Node pause-501107 event: Registered Node pause-501107 in Controller
	
	
	==> dmesg <==
	[  +0.058485] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057593] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.181979] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.146500] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.262491] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +5.162961] systemd-fstab-generator[749]: Ignoring "noauto" option for root device
	[  +0.064168] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.400962] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +0.600349] kauditd_printk_skb: 51 callbacks suppressed
	[  +6.663670] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	[  +0.090602] kauditd_printk_skb: 36 callbacks suppressed
	[ +13.894060] systemd-fstab-generator[1498]: Ignoring "noauto" option for root device
	[  +0.062965] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.624783] kauditd_printk_skb: 78 callbacks suppressed
	[Mar14 00:41] systemd-fstab-generator[2373]: Ignoring "noauto" option for root device
	[  +0.332233] systemd-fstab-generator[2480]: Ignoring "noauto" option for root device
	[  +0.387454] systemd-fstab-generator[2609]: Ignoring "noauto" option for root device
	[  +0.210136] systemd-fstab-generator[2634]: Ignoring "noauto" option for root device
	[  +0.462537] systemd-fstab-generator[2729]: Ignoring "noauto" option for root device
	[  +1.573944] systemd-fstab-generator[3241]: Ignoring "noauto" option for root device
	[  +2.141169] systemd-fstab-generator[3366]: Ignoring "noauto" option for root device
	[  +0.081190] kauditd_printk_skb: 236 callbacks suppressed
	[  +5.592395] kauditd_printk_skb: 38 callbacks suppressed
	[Mar14 00:42] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.165336] systemd-fstab-generator[3787]: Ignoring "noauto" option for root device
	
	
	==> etcd [2db445d273788ce981eb36c95b1fa652d13ea17ae0b2c66f57850e2b21523526] <==
	{"level":"info","ts":"2024-03-14T00:41:49.666282Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"50.88387ms"}
	{"level":"info","ts":"2024-03-14T00:41:49.730719Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-03-14T00:41:49.822778Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","commit-index":474}
	{"level":"info","ts":"2024-03-14T00:41:49.822982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d switched to configuration voters=()"}
	{"level":"info","ts":"2024-03-14T00:41:49.823066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became follower at term 2"}
	{"level":"info","ts":"2024-03-14T00:41:49.823111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ba3e3e863cacc4d [peers: [], term: 2, commit: 474, applied: 0, lastindex: 474, lastterm: 2]"}
	{"level":"warn","ts":"2024-03-14T00:41:49.837527Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-03-14T00:41:49.869316Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":450}
	{"level":"info","ts":"2024-03-14T00:41:49.923504Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-03-14T00:41:49.978545Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"ba3e3e863cacc4d","timeout":"7s"}
	{"level":"info","ts":"2024-03-14T00:41:49.978856Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"ba3e3e863cacc4d"}
	{"level":"info","ts":"2024-03-14T00:41:49.978934Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"ba3e3e863cacc4d","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-03-14T00:41:50.001034Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T00:41:50.009314Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-14T00:41:50.009522Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:41:50.009586Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:41:50.009599Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:41:50.009943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d switched to configuration voters=(838764542867197005)"}
	{"level":"info","ts":"2024-03-14T00:41:50.010057Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","added-peer-id":"ba3e3e863cacc4d","added-peer-peer-urls":["https://192.168.39.149:2380"]}
	{"level":"info","ts":"2024-03-14T00:41:50.010284Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:41:50.010344Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:41:50.037401Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-03-14T00:41:50.037464Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-03-14T00:41:50.037734Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ba3e3e863cacc4d","initial-advertise-peer-urls":["https://192.168.39.149:2380"],"listen-peer-urls":["https://192.168.39.149:2380"],"advertise-client-urls":["https://192.168.39.149:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.149:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T00:41:50.037803Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [8cba6feac71d4bb6d2cda5977defb2c01f884e12b124073de96857ecac9eacf9] <==
	{"level":"info","ts":"2024-03-14T00:41:54.916517Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:41:54.916528Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:41:54.916788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d switched to configuration voters=(838764542867197005)"}
	{"level":"info","ts":"2024-03-14T00:41:54.916895Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","added-peer-id":"ba3e3e863cacc4d","added-peer-peer-urls":["https://192.168.39.149:2380"]}
	{"level":"info","ts":"2024-03-14T00:41:54.91704Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:41:54.917066Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:41:54.934228Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-03-14T00:41:54.934326Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-03-14T00:41:54.930021Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T00:41:54.936492Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ba3e3e863cacc4d","initial-advertise-peer-urls":["https://192.168.39.149:2380"],"listen-peer-urls":["https://192.168.39.149:2380"],"advertise-client-urls":["https://192.168.39.149:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.149:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T00:41:54.936555Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-14T00:41:56.5681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-14T00:41:56.568312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-14T00:41:56.568368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d received MsgPreVoteResp from ba3e3e863cacc4d at term 2"}
	{"level":"info","ts":"2024-03-14T00:41:56.568405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became candidate at term 3"}
	{"level":"info","ts":"2024-03-14T00:41:56.568429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d received MsgVoteResp from ba3e3e863cacc4d at term 3"}
	{"level":"info","ts":"2024-03-14T00:41:56.568456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became leader at term 3"}
	{"level":"info","ts":"2024-03-14T00:41:56.568481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ba3e3e863cacc4d elected leader ba3e3e863cacc4d at term 3"}
	{"level":"info","ts":"2024-03-14T00:41:56.570501Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:41:56.572233Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ba3e3e863cacc4d","local-member-attributes":"{Name:pause-501107 ClientURLs:[https://192.168.39.149:2379]}","request-path":"/0/members/ba3e3e863cacc4d/attributes","cluster-id":"65f5490397676253","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T00:41:56.572617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:41:56.572755Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.149:2379"}
	{"level":"info","ts":"2024-03-14T00:41:56.573598Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T00:41:56.573824Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T00:41:56.573868Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:42:15 up 2 min,  0 users,  load average: 0.28, 0.13, 0.05
	Linux pause-501107 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4ff8ec8d3d3cbd27d9be78021b59724268d26ae108ca3c6e67fb06d3f9e821e7] <==
	I0314 00:41:58.075423       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0314 00:41:58.109251       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0314 00:41:58.109287       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0314 00:41:58.183682       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 00:41:58.183816       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 00:41:58.183838       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 00:41:58.192490       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 00:41:58.197513       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 00:41:58.209331       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 00:41:58.209860       1 aggregator.go:166] initial CRD sync complete...
	I0314 00:41:58.209911       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 00:41:58.209919       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 00:41:58.209925       1 cache.go:39] Caches are synced for autoregister controller
	E0314 00:41:58.228894       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0314 00:41:58.244385       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 00:41:58.252662       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 00:41:58.252694       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 00:41:59.050037       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 00:42:00.062300       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 00:42:00.084227       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 00:42:00.147411       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 00:42:00.190415       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 00:42:00.198388       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 00:42:10.559741       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 00:42:10.712009       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [e5a9980abc19fcb8c840dbb9799d3de518ef33585cda7f448409998c7a85dbd5] <==
	I0314 00:41:49.256481       1 options.go:220] external host was not specified, using 192.168.39.149
	I0314 00:41:49.296499       1 server.go:148] Version: v1.28.4
	I0314 00:41:49.296596       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [12760e213c97f2ab352485691401e271890fa2533db56f9d47204d6e42957286] <==
	I0314 00:42:10.565738       1 range_allocator.go:174] "Sending events to api server"
	I0314 00:42:10.565785       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0314 00:42:10.565810       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0314 00:42:10.565833       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0314 00:42:10.568040       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 00:42:10.568315       1 shared_informer.go:318] Caches are synced for expand
	I0314 00:42:10.573387       1 shared_informer.go:318] Caches are synced for deployment
	I0314 00:42:10.574081       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0314 00:42:10.574341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="152.069µs"
	I0314 00:42:10.576939       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 00:42:10.578168       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 00:42:10.579423       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0314 00:42:10.580735       1 shared_informer.go:318] Caches are synced for ephemeral
	I0314 00:42:10.582094       1 shared_informer.go:318] Caches are synced for GC
	I0314 00:42:10.587512       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 00:42:10.589878       1 shared_informer.go:318] Caches are synced for HPA
	I0314 00:42:10.595203       1 shared_informer.go:318] Caches are synced for persistent volume
	I0314 00:42:10.597729       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0314 00:42:10.601024       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0314 00:42:10.649466       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 00:42:10.678722       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 00:42:10.750629       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 00:42:11.106885       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 00:42:11.107029       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 00:42:11.109203       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [c9dcbb05205596b1feb1dd58a5099aa834f378b05a5767e04eafbc4a72cae716] <==
	
	
	==> kube-proxy [4adb0ce5b1da890d2eefe281c08c7ad6148d1b18ffc856406dc43b892368e2a7] <==
	
	
	==> kube-proxy [f362b6b9c388537fb9265a8d5254fd52299221402541224efae8d619df85674e] <==
	I0314 00:41:59.427996       1 server_others.go:69] "Using iptables proxy"
	I0314 00:41:59.438598       1 node.go:141] Successfully retrieved node IP: 192.168.39.149
	I0314 00:41:59.480549       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 00:41:59.480635       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 00:41:59.483093       1 server_others.go:152] "Using iptables Proxier"
	I0314 00:41:59.483268       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 00:41:59.483575       1 server.go:846] "Version info" version="v1.28.4"
	I0314 00:41:59.483602       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:41:59.485249       1 config.go:188] "Starting service config controller"
	I0314 00:41:59.486313       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 00:41:59.486430       1 config.go:97] "Starting endpoint slice config controller"
	I0314 00:41:59.486451       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 00:41:59.487654       1 config.go:315] "Starting node config controller"
	I0314 00:41:59.487693       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 00:41:59.586850       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 00:41:59.586863       1 shared_informer.go:318] Caches are synced for service config
	I0314 00:41:59.588365       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [02725e3ead70526b03287924b07f3153e75d8a2e487c2f74233b3fed554a3f61] <==
	
	
	==> kube-scheduler [9618edc80e82421dc90e3c7c666830d9315f44196fa10dcabe3a255c65c1d680] <==
	I0314 00:41:55.783884       1 serving.go:348] Generated self-signed cert in-memory
	W0314 00:41:58.122680       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 00:41:58.122745       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 00:41:58.122756       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 00:41:58.122763       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 00:41:58.201828       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 00:41:58.201877       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:41:58.208184       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 00:41:58.208241       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 00:41:58.211901       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 00:41:58.212041       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 00:41:58.309262       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 00:41:54 pause-501107 kubelet[3373]: I0314 00:41:54.491715    3373 scope.go:117] "RemoveContainer" containerID="e5a9980abc19fcb8c840dbb9799d3de518ef33585cda7f448409998c7a85dbd5"
	Mar 14 00:41:54 pause-501107 kubelet[3373]: I0314 00:41:54.495036    3373 scope.go:117] "RemoveContainer" containerID="c9dcbb05205596b1feb1dd58a5099aa834f378b05a5767e04eafbc4a72cae716"
	Mar 14 00:41:54 pause-501107 kubelet[3373]: I0314 00:41:54.496242    3373 scope.go:117] "RemoveContainer" containerID="02725e3ead70526b03287924b07f3153e75d8a2e487c2f74233b3fed554a3f61"
	Mar 14 00:41:54 pause-501107 kubelet[3373]: I0314 00:41:54.578817    3373 kubelet_node_status.go:70] "Attempting to register node" node="pause-501107"
	Mar 14 00:41:54 pause-501107 kubelet[3373]: E0314 00:41:54.580694    3373 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.149:8443: connect: connection refused" node="pause-501107"
	Mar 14 00:41:54 pause-501107 kubelet[3373]: W0314 00:41:54.707036    3373 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-501107&limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	Mar 14 00:41:54 pause-501107 kubelet[3373]: E0314 00:41:54.707105    3373 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-501107&limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	Mar 14 00:41:54 pause-501107 kubelet[3373]: W0314 00:41:54.707524    3373 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	Mar 14 00:41:54 pause-501107 kubelet[3373]: E0314 00:41:54.707785    3373 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	Mar 14 00:41:54 pause-501107 kubelet[3373]: W0314 00:41:54.784613    3373 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	Mar 14 00:41:54 pause-501107 kubelet[3373]: E0314 00:41:54.784696    3373 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	Mar 14 00:41:55 pause-501107 kubelet[3373]: I0314 00:41:55.382039    3373 kubelet_node_status.go:70] "Attempting to register node" node="pause-501107"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.224861    3373 kubelet_node_status.go:108] "Node was previously registered" node="pause-501107"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.224998    3373 kubelet_node_status.go:73] "Successfully registered node" node="pause-501107"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.232516    3373 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.234003    3373 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.847451    3373 apiserver.go:52] "Watching apiserver"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.850667    3373 topology_manager.go:215] "Topology Admit Handler" podUID="590a1416-591e-4be6-a96a-907165b4bb81" podNamespace="kube-system" podName="kube-proxy-rb9kh"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.850893    3373 topology_manager.go:215] "Topology Admit Handler" podUID="fbf69bb2-2b46-4c05-8ddc-85b3853135bc" podNamespace="kube-system" podName="coredns-5dd5756b68-wpvxx"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.865544    3373 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.951095    3373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/590a1416-591e-4be6-a96a-907165b4bb81-xtables-lock\") pod \"kube-proxy-rb9kh\" (UID: \"590a1416-591e-4be6-a96a-907165b4bb81\") " pod="kube-system/kube-proxy-rb9kh"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.951362    3373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/590a1416-591e-4be6-a96a-907165b4bb81-lib-modules\") pod \"kube-proxy-rb9kh\" (UID: \"590a1416-591e-4be6-a96a-907165b4bb81\") " pod="kube-system/kube-proxy-rb9kh"
	Mar 14 00:41:59 pause-501107 kubelet[3373]: I0314 00:41:59.151968    3373 scope.go:117] "RemoveContainer" containerID="564c6d1c26294d9e13c92ef918498427fc6d10ee56d848841c9ad11172aa1ebc"
	Mar 14 00:41:59 pause-501107 kubelet[3373]: I0314 00:41:59.152581    3373 scope.go:117] "RemoveContainer" containerID="4adb0ce5b1da890d2eefe281c08c7ad6148d1b18ffc856406dc43b892368e2a7"
	Mar 14 00:42:06 pause-501107 kubelet[3373]: I0314 00:42:06.160532    3373 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-501107 -n pause-501107
helpers_test.go:261: (dbg) Run:  kubectl --context pause-501107 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-501107 -n pause-501107
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-501107 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-501107 logs -n 25: (1.525605495s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-326260 sudo              | cilium-326260             | jenkins | v1.32.0 | 14 Mar 24 00:37 UTC |                     |
	|         | systemctl status crio --all        |                           |         |         |                     |                     |
	|         | --full --no-pager                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-326260 sudo              | cilium-326260             | jenkins | v1.32.0 | 14 Mar 24 00:37 UTC |                     |
	|         | systemctl cat crio --no-pager      |                           |         |         |                     |                     |
	| ssh     | -p cilium-326260 sudo find         | cilium-326260             | jenkins | v1.32.0 | 14 Mar 24 00:37 UTC |                     |
	|         | /etc/crio -type f -exec sh -c      |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;               |                           |         |         |                     |                     |
	| ssh     | -p cilium-326260 sudo crio         | cilium-326260             | jenkins | v1.32.0 | 14 Mar 24 00:37 UTC |                     |
	|         | config                             |                           |         |         |                     |                     |
	| delete  | -p cilium-326260                   | cilium-326260             | jenkins | v1.32.0 | 14 Mar 24 00:37 UTC | 14 Mar 24 00:37 UTC |
	| start   | -p force-systemd-flag-058213       | force-systemd-flag-058213 | jenkins | v1.32.0 | 14 Mar 24 00:37 UTC | 14 Mar 24 00:38 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-058213 ssh cat  | force-systemd-flag-058213 | jenkins | v1.32.0 | 14 Mar 24 00:38 UTC | 14 Mar 24 00:38 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-058213       | force-systemd-flag-058213 | jenkins | v1.32.0 | 14 Mar 24 00:38 UTC | 14 Mar 24 00:38 UTC |
	| start   | -p force-systemd-env-233196        | force-systemd-env-233196  | jenkins | v1.32.0 | 14 Mar 24 00:38 UTC | 14 Mar 24 00:39 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-863544          | running-upgrade-863544    | jenkins | v1.32.0 | 14 Mar 24 00:39 UTC | 14 Mar 24 00:40 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-820136             | offline-crio-820136       | jenkins | v1.32.0 | 14 Mar 24 00:39 UTC | 14 Mar 24 00:39 UTC |
	| start   | -p pause-501107 --memory=2048      | pause-501107              | jenkins | v1.32.0 | 14 Mar 24 00:39 UTC | 14 Mar 24 00:41 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-848457 stop        | minikube                  | jenkins | v1.26.0 | 14 Mar 24 00:39 UTC | 14 Mar 24 00:39 UTC |
	| start   | -p stopped-upgrade-848457          | stopped-upgrade-848457    | jenkins | v1.32.0 | 14 Mar 24 00:39 UTC | 14 Mar 24 00:40 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-233196        | force-systemd-env-233196  | jenkins | v1.32.0 | 14 Mar 24 00:39 UTC | 14 Mar 24 00:39 UTC |
	| start   | -p cert-expiration-577166          | cert-expiration-577166    | jenkins | v1.32.0 | 14 Mar 24 00:39 UTC | 14 Mar 24 00:41 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-863544          | running-upgrade-863544    | jenkins | v1.32.0 | 14 Mar 24 00:40 UTC | 14 Mar 24 00:40 UTC |
	| start   | -p kubernetes-upgrade-552430       | kubernetes-upgrade-552430 | jenkins | v1.32.0 | 14 Mar 24 00:40 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-848457          | stopped-upgrade-848457    | jenkins | v1.32.0 | 14 Mar 24 00:40 UTC | 14 Mar 24 00:40 UTC |
	| start   | -p NoKubernetes-576005             | NoKubernetes-576005       | jenkins | v1.32.0 | 14 Mar 24 00:40 UTC |                     |
	|         | --no-kubernetes                    |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20          |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-576005             | NoKubernetes-576005       | jenkins | v1.32.0 | 14 Mar 24 00:40 UTC | 14 Mar 24 00:42 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-501107                    | pause-501107              | jenkins | v1.32.0 | 14 Mar 24 00:41 UTC | 14 Mar 24 00:42 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-576005             | NoKubernetes-576005       | jenkins | v1.32.0 | 14 Mar 24 00:42 UTC | 14 Mar 24 00:42 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-576005             | NoKubernetes-576005       | jenkins | v1.32.0 | 14 Mar 24 00:42 UTC | 14 Mar 24 00:42 UTC |
	| start   | -p NoKubernetes-576005             | NoKubernetes-576005       | jenkins | v1.32.0 | 14 Mar 24 00:42 UTC |                     |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:42:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:42:07.124431   49637 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:42:07.124651   49637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:42:07.124654   49637 out.go:304] Setting ErrFile to fd 2...
	I0314 00:42:07.124658   49637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:42:07.124831   49637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:42:07.125386   49637 out.go:298] Setting JSON to false
	I0314 00:42:07.126326   49637 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5070,"bootTime":1710371857,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:42:07.126378   49637 start.go:139] virtualization: kvm guest
	I0314 00:42:07.128715   49637 out.go:177] * [NoKubernetes-576005] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:42:07.130332   49637 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:42:07.130377   49637 notify.go:220] Checking for updates...
	I0314 00:42:07.131802   49637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:42:07.133274   49637 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:42:07.134682   49637 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:42:07.135993   49637 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:42:07.137208   49637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:42:07.138869   49637 config.go:182] Loaded profile config "cert-expiration-577166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:42:07.139000   49637 config.go:182] Loaded profile config "kubernetes-upgrade-552430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:42:07.139173   49637 config.go:182] Loaded profile config "pause-501107": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:42:07.139195   49637 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0314 00:42:07.139281   49637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:42:07.175907   49637 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 00:42:07.177262   49637 start.go:297] selected driver: kvm2
	I0314 00:42:07.177269   49637 start.go:901] validating driver "kvm2" against <nil>
	I0314 00:42:07.177278   49637 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:42:07.177561   49637 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0314 00:42:07.177636   49637 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:42:07.177711   49637 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:42:07.192887   49637 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:42:07.192922   49637 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 00:42:07.193357   49637 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0314 00:42:07.193516   49637 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 00:42:07.193563   49637 cni.go:84] Creating CNI manager for ""
	I0314 00:42:07.193570   49637 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:42:07.193576   49637 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 00:42:07.193595   49637 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0314 00:42:07.193658   49637 start.go:340] cluster config:
	{Name:NoKubernetes-576005 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-576005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:42:07.193740   49637 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:42:07.195507   49637 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-576005
	I0314 00:42:07.196860   49637 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0314 00:42:07.618130   49637 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0314 00:42:07.618335   49637 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/NoKubernetes-576005/config.json ...
	I0314 00:42:07.618372   49637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/NoKubernetes-576005/config.json: {Name:mk318b2b739825ccd6a86d1e3a4a2abb62ac6099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:42:07.618519   49637 start.go:360] acquireMachinesLock for NoKubernetes-576005: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:42:07.618543   49637 start.go:364] duration metric: took 16.627µs to acquireMachinesLock for "NoKubernetes-576005"
	I0314 00:42:07.618552   49637 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-576005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-576005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:42:07.618614   49637 start.go:125] createHost starting for "" (driver="kvm2")
	I0314 00:42:04.248911   49120 pod_ready.go:102] pod "coredns-5dd5756b68-wpvxx" in "kube-system" namespace has status "Ready":"False"
	I0314 00:42:06.248608   49120 pod_ready.go:92] pod "coredns-5dd5756b68-wpvxx" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:06.248634   49120 pod_ready.go:81] duration metric: took 6.007820063s for pod "coredns-5dd5756b68-wpvxx" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:06.248647   49120 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:08.257417   49120 pod_ready.go:102] pod "etcd-pause-501107" in "kube-system" namespace has status "Ready":"False"
	I0314 00:42:04.976856   48503 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 00:42:04.977648   48503 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:42:04.977845   48503 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:42:10.256167   49120 pod_ready.go:92] pod "etcd-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:10.256197   49120 pod_ready.go:81] duration metric: took 4.007541297s for pod "etcd-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.256211   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.261910   49120 pod_ready.go:92] pod "kube-apiserver-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:10.261936   49120 pod_ready.go:81] duration metric: took 5.716899ms for pod "kube-apiserver-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.261949   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.779064   49120 pod_ready.go:92] pod "kube-controller-manager-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:10.779088   49120 pod_ready.go:81] duration metric: took 517.131354ms for pod "kube-controller-manager-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.779098   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rb9kh" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.794989   49120 pod_ready.go:92] pod "kube-proxy-rb9kh" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:10.795012   49120 pod_ready.go:81] duration metric: took 15.908648ms for pod "kube-proxy-rb9kh" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.795022   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.800758   49120 pod_ready.go:92] pod "kube-scheduler-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:10.800787   49120 pod_ready.go:81] duration metric: took 5.757905ms for pod "kube-scheduler-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:10.800797   49120 pod_ready.go:38] duration metric: took 10.565383875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:42:10.800826   49120 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:42:10.815952   49120 ops.go:34] apiserver oom_adj: -16
	I0314 00:42:10.815978   49120 kubeadm.go:591] duration metric: took 18.528593133s to restartPrimaryControlPlane
	I0314 00:42:10.815988   49120 kubeadm.go:393] duration metric: took 18.657338112s to StartCluster
	I0314 00:42:10.816007   49120 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:42:10.816084   49120 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:42:10.817266   49120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:42:10.817520   49120 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:42:10.819193   49120 out.go:177] * Verifying Kubernetes components...
	I0314 00:42:10.817568   49120 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:42:10.817750   49120 config.go:182] Loaded profile config "pause-501107": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:42:10.820477   49120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:42:10.821814   49120 out.go:177] * Enabled addons: 
	I0314 00:42:07.620635   49637 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0314 00:42:07.620839   49637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:42:07.620870   49637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:42:07.635550   49637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33673
	I0314 00:42:07.635940   49637 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:42:07.636426   49637 main.go:141] libmachine: Using API Version  1
	I0314 00:42:07.636442   49637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:42:07.636757   49637 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:42:07.636936   49637 main.go:141] libmachine: (NoKubernetes-576005) Calling .GetMachineName
	I0314 00:42:07.637099   49637 main.go:141] libmachine: (NoKubernetes-576005) Calling .DriverName
	I0314 00:42:07.637247   49637 start.go:159] libmachine.API.Create for "NoKubernetes-576005" (driver="kvm2")
	I0314 00:42:07.637272   49637 client.go:168] LocalClient.Create starting
	I0314 00:42:07.637295   49637 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem
	I0314 00:42:07.637327   49637 main.go:141] libmachine: Decoding PEM data...
	I0314 00:42:07.637339   49637 main.go:141] libmachine: Parsing certificate...
	I0314 00:42:07.637392   49637 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem
	I0314 00:42:07.637407   49637 main.go:141] libmachine: Decoding PEM data...
	I0314 00:42:07.637414   49637 main.go:141] libmachine: Parsing certificate...
	I0314 00:42:07.637427   49637 main.go:141] libmachine: Running pre-create checks...
	I0314 00:42:07.637432   49637 main.go:141] libmachine: (NoKubernetes-576005) Calling .PreCreateCheck
	I0314 00:42:07.637725   49637 main.go:141] libmachine: (NoKubernetes-576005) Calling .GetConfigRaw
	I0314 00:42:07.638090   49637 main.go:141] libmachine: Creating machine...
	I0314 00:42:07.638096   49637 main.go:141] libmachine: (NoKubernetes-576005) Calling .Create
	I0314 00:42:07.638216   49637 main.go:141] libmachine: (NoKubernetes-576005) Creating KVM machine...
	I0314 00:42:07.639437   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | found existing default KVM network
	I0314 00:42:07.640539   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:07.640371   49660 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:83:b9:5c} reservation:<nil>}
	I0314 00:42:07.641315   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:07.641242   49660 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:db:25:bb} reservation:<nil>}
	I0314 00:42:07.642306   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:07.642241   49660 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:60:3f:ac} reservation:<nil>}
	I0314 00:42:07.643449   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:07.643385   49660 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289ab0}
	I0314 00:42:07.643491   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | created network xml: 
	I0314 00:42:07.643499   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | <network>
	I0314 00:42:07.643505   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |   <name>mk-NoKubernetes-576005</name>
	I0314 00:42:07.643510   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |   <dns enable='no'/>
	I0314 00:42:07.643514   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |   
	I0314 00:42:07.643519   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0314 00:42:07.643532   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |     <dhcp>
	I0314 00:42:07.643537   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0314 00:42:07.643541   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |     </dhcp>
	I0314 00:42:07.643555   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |   </ip>
	I0314 00:42:07.643562   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG |   
	I0314 00:42:07.643568   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | </network>
	I0314 00:42:07.643583   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | 
	I0314 00:42:07.648846   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | trying to create private KVM network mk-NoKubernetes-576005 192.168.72.0/24...
	I0314 00:42:07.719203   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | private KVM network mk-NoKubernetes-576005 192.168.72.0/24 created
	I0314 00:42:07.719224   49637 main.go:141] libmachine: (NoKubernetes-576005) Setting up store path in /home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005 ...
	I0314 00:42:07.719241   49637 main.go:141] libmachine: (NoKubernetes-576005) Building disk image from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 00:42:07.719253   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:07.719198   49660 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:42:07.719382   49637 main.go:141] libmachine: (NoKubernetes-576005) Downloading /home/jenkins/minikube-integration/18375-4912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 00:42:07.959602   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:07.959482   49660 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005/id_rsa...
	I0314 00:42:08.158783   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:08.158672   49660 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005/NoKubernetes-576005.rawdisk...
	I0314 00:42:08.158799   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Writing magic tar header
	I0314 00:42:08.158840   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Writing SSH key tar header
	I0314 00:42:08.158879   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:08.158803   49660 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005 ...
	I0314 00:42:08.158903   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005
	I0314 00:42:08.158934   49637 main.go:141] libmachine: (NoKubernetes-576005) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005 (perms=drwx------)
	I0314 00:42:08.158944   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines
	I0314 00:42:08.158951   49637 main.go:141] libmachine: (NoKubernetes-576005) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines (perms=drwxr-xr-x)
	I0314 00:42:08.158960   49637 main.go:141] libmachine: (NoKubernetes-576005) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube (perms=drwxr-xr-x)
	I0314 00:42:08.158965   49637 main.go:141] libmachine: (NoKubernetes-576005) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912 (perms=drwxrwxr-x)
	I0314 00:42:08.158971   49637 main.go:141] libmachine: (NoKubernetes-576005) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 00:42:08.158976   49637 main.go:141] libmachine: (NoKubernetes-576005) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 00:42:08.158983   49637 main.go:141] libmachine: (NoKubernetes-576005) Creating domain...
	I0314 00:42:08.158990   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:42:08.158995   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912
	I0314 00:42:08.159000   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 00:42:08.159004   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Checking permissions on dir: /home/jenkins
	I0314 00:42:08.159009   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Checking permissions on dir: /home
	I0314 00:42:08.159013   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | Skipping /home - not owner
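The "Fixing permissions" pass above walks from the machine directory up toward the filesystem root, setting the executable bit on each directory it owns and skipping the rest so that libvirt/qemu can traverse to the disk image. A loose sketch of that idea, with an illustrative store path and not minikube's actual helper:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// fixPermissions walks from the machine directory up toward "/", adding the
// executable (traverse) bit to each directory. Directories we cannot chmod
// (e.g. /home, owned by someone else) are skipped, as in the log above.
func fixPermissions(storePath string) {
	for dir := storePath; dir != "/" && dir != "."; dir = filepath.Dir(dir) {
		info, err := os.Stat(dir)
		if err != nil {
			break
		}
		if err := os.Chmod(dir, info.Mode().Perm()|0o111); err != nil {
			fmt.Printf("skipping %s: %v\n", dir, err)
			continue
		}
		fmt.Printf("set executable bit on %s\n", dir)
	}
}

func main() {
	fixPermissions("/home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005")
}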
	I0314 00:42:08.160139   49637 main.go:141] libmachine: (NoKubernetes-576005) define libvirt domain using xml: 
	I0314 00:42:08.160147   49637 main.go:141] libmachine: (NoKubernetes-576005) <domain type='kvm'>
	I0314 00:42:08.160152   49637 main.go:141] libmachine: (NoKubernetes-576005)   <name>NoKubernetes-576005</name>
	I0314 00:42:08.160156   49637 main.go:141] libmachine: (NoKubernetes-576005)   <memory unit='MiB'>6000</memory>
	I0314 00:42:08.160161   49637 main.go:141] libmachine: (NoKubernetes-576005)   <vcpu>2</vcpu>
	I0314 00:42:08.160164   49637 main.go:141] libmachine: (NoKubernetes-576005)   <features>
	I0314 00:42:08.160170   49637 main.go:141] libmachine: (NoKubernetes-576005)     <acpi/>
	I0314 00:42:08.160173   49637 main.go:141] libmachine: (NoKubernetes-576005)     <apic/>
	I0314 00:42:08.160177   49637 main.go:141] libmachine: (NoKubernetes-576005)     <pae/>
	I0314 00:42:08.160180   49637 main.go:141] libmachine: (NoKubernetes-576005)     
	I0314 00:42:08.160194   49637 main.go:141] libmachine: (NoKubernetes-576005)   </features>
	I0314 00:42:08.160199   49637 main.go:141] libmachine: (NoKubernetes-576005)   <cpu mode='host-passthrough'>
	I0314 00:42:08.160204   49637 main.go:141] libmachine: (NoKubernetes-576005)   
	I0314 00:42:08.160209   49637 main.go:141] libmachine: (NoKubernetes-576005)   </cpu>
	I0314 00:42:08.160215   49637 main.go:141] libmachine: (NoKubernetes-576005)   <os>
	I0314 00:42:08.160220   49637 main.go:141] libmachine: (NoKubernetes-576005)     <type>hvm</type>
	I0314 00:42:08.160227   49637 main.go:141] libmachine: (NoKubernetes-576005)     <boot dev='cdrom'/>
	I0314 00:42:08.160240   49637 main.go:141] libmachine: (NoKubernetes-576005)     <boot dev='hd'/>
	I0314 00:42:08.160246   49637 main.go:141] libmachine: (NoKubernetes-576005)     <bootmenu enable='no'/>
	I0314 00:42:08.160249   49637 main.go:141] libmachine: (NoKubernetes-576005)   </os>
	I0314 00:42:08.160253   49637 main.go:141] libmachine: (NoKubernetes-576005)   <devices>
	I0314 00:42:08.160257   49637 main.go:141] libmachine: (NoKubernetes-576005)     <disk type='file' device='cdrom'>
	I0314 00:42:08.160267   49637 main.go:141] libmachine: (NoKubernetes-576005)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005/boot2docker.iso'/>
	I0314 00:42:08.160276   49637 main.go:141] libmachine: (NoKubernetes-576005)       <target dev='hdc' bus='scsi'/>
	I0314 00:42:08.160280   49637 main.go:141] libmachine: (NoKubernetes-576005)       <readonly/>
	I0314 00:42:08.160284   49637 main.go:141] libmachine: (NoKubernetes-576005)     </disk>
	I0314 00:42:08.160289   49637 main.go:141] libmachine: (NoKubernetes-576005)     <disk type='file' device='disk'>
	I0314 00:42:08.160294   49637 main.go:141] libmachine: (NoKubernetes-576005)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 00:42:08.160301   49637 main.go:141] libmachine: (NoKubernetes-576005)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/NoKubernetes-576005/NoKubernetes-576005.rawdisk'/>
	I0314 00:42:08.160305   49637 main.go:141] libmachine: (NoKubernetes-576005)       <target dev='hda' bus='virtio'/>
	I0314 00:42:08.160309   49637 main.go:141] libmachine: (NoKubernetes-576005)     </disk>
	I0314 00:42:08.160312   49637 main.go:141] libmachine: (NoKubernetes-576005)     <interface type='network'>
	I0314 00:42:08.160320   49637 main.go:141] libmachine: (NoKubernetes-576005)       <source network='mk-NoKubernetes-576005'/>
	I0314 00:42:08.160328   49637 main.go:141] libmachine: (NoKubernetes-576005)       <model type='virtio'/>
	I0314 00:42:08.160333   49637 main.go:141] libmachine: (NoKubernetes-576005)     </interface>
	I0314 00:42:08.160336   49637 main.go:141] libmachine: (NoKubernetes-576005)     <interface type='network'>
	I0314 00:42:08.160341   49637 main.go:141] libmachine: (NoKubernetes-576005)       <source network='default'/>
	I0314 00:42:08.160345   49637 main.go:141] libmachine: (NoKubernetes-576005)       <model type='virtio'/>
	I0314 00:42:08.160349   49637 main.go:141] libmachine: (NoKubernetes-576005)     </interface>
	I0314 00:42:08.160352   49637 main.go:141] libmachine: (NoKubernetes-576005)     <serial type='pty'>
	I0314 00:42:08.160377   49637 main.go:141] libmachine: (NoKubernetes-576005)       <target port='0'/>
	I0314 00:42:08.160389   49637 main.go:141] libmachine: (NoKubernetes-576005)     </serial>
	I0314 00:42:08.160394   49637 main.go:141] libmachine: (NoKubernetes-576005)     <console type='pty'>
	I0314 00:42:08.160401   49637 main.go:141] libmachine: (NoKubernetes-576005)       <target type='serial' port='0'/>
	I0314 00:42:08.160406   49637 main.go:141] libmachine: (NoKubernetes-576005)     </console>
	I0314 00:42:08.160410   49637 main.go:141] libmachine: (NoKubernetes-576005)     <rng model='virtio'>
	I0314 00:42:08.160416   49637 main.go:141] libmachine: (NoKubernetes-576005)       <backend model='random'>/dev/random</backend>
	I0314 00:42:08.160420   49637 main.go:141] libmachine: (NoKubernetes-576005)     </rng>
	I0314 00:42:08.160424   49637 main.go:141] libmachine: (NoKubernetes-576005)     
	I0314 00:42:08.160427   49637 main.go:141] libmachine: (NoKubernetes-576005)     
	I0314 00:42:08.160441   49637 main.go:141] libmachine: (NoKubernetes-576005)   </devices>
	I0314 00:42:08.160444   49637 main.go:141] libmachine: (NoKubernetes-576005) </domain>
	I0314 00:42:08.160452   49637 main.go:141] libmachine: (NoKubernetes-576005) 
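The XML dump above is the domain definition the driver hands to libvirt before starting the guest. A minimal sketch of the define-then-start step, assuming the XML has been written to a file; the path is hypothetical and virsh again stands in for the driver's libvirt bindings:

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStartDomain registers a domain from its XML description and boots
// it, mirroring the "define libvirt domain using xml" / "Creating domain..."
// steps logged above.
func defineAndStartDomain(xmlPath, name string) error {
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define failed: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := defineAndStartDomain("/tmp/NoKubernetes-576005.xml", "NoKubernetes-576005"); err != nil {
		fmt.Println(err)
	}
}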
	I0314 00:42:08.165102   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:20:27:da in network default
	I0314 00:42:08.165705   49637 main.go:141] libmachine: (NoKubernetes-576005) Ensuring networks are active...
	I0314 00:42:08.165717   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:08.166426   49637 main.go:141] libmachine: (NoKubernetes-576005) Ensuring network default is active
	I0314 00:42:08.166694   49637 main.go:141] libmachine: (NoKubernetes-576005) Ensuring network mk-NoKubernetes-576005 is active
	I0314 00:42:08.167169   49637 main.go:141] libmachine: (NoKubernetes-576005) Getting domain xml...
	I0314 00:42:08.167895   49637 main.go:141] libmachine: (NoKubernetes-576005) Creating domain...
	I0314 00:42:09.388674   49637 main.go:141] libmachine: (NoKubernetes-576005) Waiting to get IP...
	I0314 00:42:09.389479   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:09.389998   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | unable to find current IP address of domain NoKubernetes-576005 in network mk-NoKubernetes-576005
	I0314 00:42:09.390037   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:09.389975   49660 retry.go:31] will retry after 221.324781ms: waiting for machine to come up
	I0314 00:42:09.613399   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:09.613850   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | unable to find current IP address of domain NoKubernetes-576005 in network mk-NoKubernetes-576005
	I0314 00:42:09.613865   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:09.613808   49660 retry.go:31] will retry after 261.31818ms: waiting for machine to come up
	I0314 00:42:09.876229   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:09.876750   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | unable to find current IP address of domain NoKubernetes-576005 in network mk-NoKubernetes-576005
	I0314 00:42:09.876797   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:09.876735   49660 retry.go:31] will retry after 333.496586ms: waiting for machine to come up
	I0314 00:42:10.212257   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:10.212700   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | unable to find current IP address of domain NoKubernetes-576005 in network mk-NoKubernetes-576005
	I0314 00:42:10.212727   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:10.212689   49660 retry.go:31] will retry after 530.508296ms: waiting for machine to come up
	I0314 00:42:10.744916   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:10.745460   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | unable to find current IP address of domain NoKubernetes-576005 in network mk-NoKubernetes-576005
	I0314 00:42:10.745482   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:10.745388   49660 retry.go:31] will retry after 560.790902ms: waiting for machine to come up
	I0314 00:42:11.308128   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:11.308619   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | unable to find current IP address of domain NoKubernetes-576005 in network mk-NoKubernetes-576005
	I0314 00:42:11.308649   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:11.308560   49660 retry.go:31] will retry after 791.425652ms: waiting for machine to come up
	I0314 00:42:12.101911   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | domain NoKubernetes-576005 has defined MAC address 52:54:00:b6:cb:c4 in network mk-NoKubernetes-576005
	I0314 00:42:12.102442   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | unable to find current IP address of domain NoKubernetes-576005 in network mk-NoKubernetes-576005
	I0314 00:42:12.102462   49637 main.go:141] libmachine: (NoKubernetes-576005) DBG | I0314 00:42:12.102375   49660 retry.go:31] will retry after 1.13830533s: waiting for machine to come up
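The repeated "will retry after ...: waiting for machine to come up" lines are a polling loop with a growing, jittered delay while the guest acquires a DHCP lease. A minimal sketch of that pattern, where lookupIP is a stand-in for the driver's lease query and not minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP with a growing, jittered backoff until it returns
// an address or the deadline passes, mirroring the retry loop in the log.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))/2
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay on every failed attempt
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Fake lookup that "finds" an IP after a few attempts, just to exercise the loop.
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.72.10", nil
	}, time.Minute)
	fmt.Println(ip, err)
}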
	I0314 00:42:10.823224   49120 addons.go:505] duration metric: took 5.657841ms for enable addons: enabled=[]
	I0314 00:42:11.025763   49120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:42:11.041692   49120 node_ready.go:35] waiting up to 6m0s for node "pause-501107" to be "Ready" ...
	I0314 00:42:11.044573   49120 node_ready.go:49] node "pause-501107" has status "Ready":"True"
	I0314 00:42:11.044601   49120 node_ready.go:38] duration metric: took 2.879204ms for node "pause-501107" to be "Ready" ...
	I0314 00:42:11.044608   49120 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:42:11.057098   49120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wpvxx" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:11.454422   49120 pod_ready.go:92] pod "coredns-5dd5756b68-wpvxx" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:11.454454   49120 pod_ready.go:81] duration metric: took 397.332452ms for pod "coredns-5dd5756b68-wpvxx" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:11.454467   49120 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:11.854039   49120 pod_ready.go:92] pod "etcd-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:11.854062   49120 pod_ready.go:81] duration metric: took 399.587177ms for pod "etcd-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:11.854079   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:12.255078   49120 pod_ready.go:92] pod "kube-apiserver-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:12.255103   49120 pod_ready.go:81] duration metric: took 401.01813ms for pod "kube-apiserver-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:12.255113   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:12.653061   49120 pod_ready.go:92] pod "kube-controller-manager-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:12.653096   49120 pod_ready.go:81] duration metric: took 397.975373ms for pod "kube-controller-manager-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:12.653108   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rb9kh" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:13.053400   49120 pod_ready.go:92] pod "kube-proxy-rb9kh" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:13.053427   49120 pod_ready.go:81] duration metric: took 400.311039ms for pod "kube-proxy-rb9kh" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:13.053439   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:13.455577   49120 pod_ready.go:92] pod "kube-scheduler-pause-501107" in "kube-system" namespace has status "Ready":"True"
	I0314 00:42:13.455603   49120 pod_ready.go:81] duration metric: took 402.156969ms for pod "kube-scheduler-pause-501107" in "kube-system" namespace to be "Ready" ...
	I0314 00:42:13.455612   49120 pod_ready.go:38] duration metric: took 2.410991413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
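The pod_ready checks above poll each system-critical pod until its Ready condition reports "True". A small client-go sketch of that check, assuming a kubeconfig path and pod name chosen here purely for illustration (minikube manages its own client configuration internally):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, mirroring the
// "Ready":"True" checks in the log above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for start := time.Now(); time.Since(start) < 6*time.Minute; time.Sleep(2 * time.Second) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-pause-501107", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
	}
	fmt.Println("timed out waiting for pod to be Ready")
}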
	I0314 00:42:13.455672   49120 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:42:13.455729   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:42:13.470208   49120 api_server.go:72] duration metric: took 2.652629103s to wait for apiserver process to appear ...
	I0314 00:42:13.470239   49120 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:42:13.470260   49120 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0314 00:42:13.477249   49120 api_server.go:279] https://192.168.39.149:8443/healthz returned 200:
	ok
	I0314 00:42:13.480606   49120 api_server.go:141] control plane version: v1.28.4
	I0314 00:42:13.480629   49120 api_server.go:131] duration metric: took 10.381847ms to wait for apiserver health ...
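The healthz probe above is an HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 with body "ok". A minimal sketch of that probe; skipping TLS verification here is only to keep the example self-contained, since minikube authenticates with the cluster's client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz GETs the apiserver healthz endpoint and reports the status and
// body, mirroring the "returned 200: ok" lines in the log.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.149:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}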
	I0314 00:42:13.480640   49120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:42:13.657944   49120 system_pods.go:59] 6 kube-system pods found
	I0314 00:42:13.657976   49120 system_pods.go:61] "coredns-5dd5756b68-wpvxx" [fbf69bb2-2b46-4c05-8ddc-85b3853135bc] Running
	I0314 00:42:13.657982   49120 system_pods.go:61] "etcd-pause-501107" [4c4fa65f-568a-4b53-94d3-6b8182e159c4] Running
	I0314 00:42:13.657988   49120 system_pods.go:61] "kube-apiserver-pause-501107" [d2f6f361-722e-41c4-9fa3-8a36f06e7a71] Running
	I0314 00:42:13.657994   49120 system_pods.go:61] "kube-controller-manager-pause-501107" [2bd77fd2-8422-4a18-8e66-f8d065209bbb] Running
	I0314 00:42:13.657997   49120 system_pods.go:61] "kube-proxy-rb9kh" [590a1416-591e-4be6-a96a-907165b4bb81] Running
	I0314 00:42:13.658002   49120 system_pods.go:61] "kube-scheduler-pause-501107" [77e15395-49cf-4302-84c1-4f8f0d21cf9f] Running
	I0314 00:42:13.658010   49120 system_pods.go:74] duration metric: took 177.362445ms to wait for pod list to return data ...
	I0314 00:42:13.658026   49120 default_sa.go:34] waiting for default service account to be created ...
	I0314 00:42:13.855214   49120 default_sa.go:45] found service account: "default"
	I0314 00:42:13.855241   49120 default_sa.go:55] duration metric: took 197.205194ms for default service account to be created ...
	I0314 00:42:13.855253   49120 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 00:42:14.057563   49120 system_pods.go:86] 6 kube-system pods found
	I0314 00:42:14.057592   49120 system_pods.go:89] "coredns-5dd5756b68-wpvxx" [fbf69bb2-2b46-4c05-8ddc-85b3853135bc] Running
	I0314 00:42:14.057600   49120 system_pods.go:89] "etcd-pause-501107" [4c4fa65f-568a-4b53-94d3-6b8182e159c4] Running
	I0314 00:42:14.057607   49120 system_pods.go:89] "kube-apiserver-pause-501107" [d2f6f361-722e-41c4-9fa3-8a36f06e7a71] Running
	I0314 00:42:14.057613   49120 system_pods.go:89] "kube-controller-manager-pause-501107" [2bd77fd2-8422-4a18-8e66-f8d065209bbb] Running
	I0314 00:42:14.057629   49120 system_pods.go:89] "kube-proxy-rb9kh" [590a1416-591e-4be6-a96a-907165b4bb81] Running
	I0314 00:42:14.057635   49120 system_pods.go:89] "kube-scheduler-pause-501107" [77e15395-49cf-4302-84c1-4f8f0d21cf9f] Running
	I0314 00:42:14.057643   49120 system_pods.go:126] duration metric: took 202.383041ms to wait for k8s-apps to be running ...
	I0314 00:42:14.057652   49120 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 00:42:14.057705   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:42:14.078695   49120 system_svc.go:56] duration metric: took 21.034303ms WaitForService to wait for kubelet
	I0314 00:42:14.078723   49120 kubeadm.go:576] duration metric: took 3.261145926s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 00:42:14.078779   49120 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:42:14.254039   49120 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:42:14.254064   49120 node_conditions.go:123] node cpu capacity is 2
	I0314 00:42:14.254075   49120 node_conditions.go:105] duration metric: took 175.289399ms to run NodePressure ...
	I0314 00:42:14.254087   49120 start.go:240] waiting for startup goroutines ...
	I0314 00:42:14.254095   49120 start.go:245] waiting for cluster config update ...
	I0314 00:42:14.254112   49120 start.go:254] writing updated cluster config ...
	I0314 00:42:14.254382   49120 ssh_runner.go:195] Run: rm -f paused
	I0314 00:42:14.302155   49120 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 00:42:14.304313   49120 out.go:177] * Done! kubectl is now configured to use "pause-501107" cluster and "default" namespace by default
	I0314 00:42:09.978145   48503 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:42:09.978403   48503 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	
	==> CRI-O <==
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.146868406Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7b3954bc9895153cb3000c648dfd9dbcc65ddc315332fbbd2243901fce413ec9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-wpvxx,Uid:fbf69bb2-2b46-4c05-8ddc-85b3853135bc,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710376910833701275,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T00:40:32.301873448Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab2de5f80237848af6fbfa3a0247be5985dd7e0b0b0f764002090602e96a8d2c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-501107,Uid:90833fcc9599b44aa6218e4f9b67bc85,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710376910789822332,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90833fcc9599b44aa6218e4f9b67bc85,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 90833fcc9599b44aa6218e4f9b67bc85,kubernetes.io/config.seen: 2024-03-14T00:40:18.608651308Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0e4fa687e4a3c32fd476bfa45b5972f222a3c42988183fa9f324e32bdb3a0ee4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-501107,Uid:97e90dbb56f55f08b36551e3e1ee98f1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710376910772636763,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e90dbb56f55
f08b36551e3e1ee98f1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 97e90dbb56f55f08b36551e3e1ee98f1,kubernetes.io/config.seen: 2024-03-14T00:40:18.608652076Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:246ac438d9e6a15499557e851bda6274db2faf33f252325263c136ad2245012a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-501107,Uid:eecb822b291100c874f48db4490230ec,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710376910751842791,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb822b291100c874f48db4490230ec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.149:8443,kubernetes.io/config.hash: eecb822b291100c874f48db4490230ec,kubernetes.io/config.seen: 2024-03-14T00:40:18.608650233Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:39685797b16c79da609747c30f0224e7660204849c5078bc90a4656f0bfb1a5b,Metadata:&PodSandboxMetadata{Name:kube-proxy-rb9kh,Uid:590a1416-591e-4be6-a96a-907165b4bb81,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710376910639748824,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T00:40:31.885183245Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fffb7517675f5de261500c7902d65af5a2214ab3f432631244e62ec966853d53,Metadata:&PodSandboxMetadata{Name:etcd-pause-501107,Uid:a9bb41914dbba8a07ea51e5de653db74,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710376910616691572,Labels:map[string]string{component: etcd,io.kubernetes.contai
ner.name: POD,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.149:2379,kubernetes.io/config.hash: a9bb41914dbba8a07ea51e5de653db74,kubernetes.io/config.seen: 2024-03-14T00:40:18.608646607Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a5c729b746c570e4f5cdb36bc03e4f2e377e874d63ae24117b503c79a60e20b0,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-wpvxx,Uid:fbf69bb2-2b46-4c05-8ddc-85b3853135bc,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710376908311078279,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-03-14T00:40:32.301873448Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9b9a0114991f5a81699c7687a902ec654e8b4912aad278e3dc25ec2efa24f6b7,Metadata:&PodSandboxMetadata{Name:kube-proxy-rb9kh,Uid:590a1416-591e-4be6-a96a-907165b4bb81,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710376908284170179,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T00:40:31.885183245Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a505210a89e6e817da678633ebb48adae43d48cb34815d8913d191576d15eaf3,Metadata:&PodSandboxMetadata{Name:etcd-pause-501107,Uid:a9bb41914dbba8a07ea51e5de653db74,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:171
0376908276332122,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.149:2379,kubernetes.io/config.hash: a9bb41914dbba8a07ea51e5de653db74,kubernetes.io/config.seen: 2024-03-14T00:40:18.608646607Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e30ba0537bb2a7643f08dac541d78346935dc46a06d209cc3d247a7772f3639a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-501107,Uid:90833fcc9599b44aa6218e4f9b67bc85,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710376908271850161,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 90833fcc9599b44aa6218e4f9b67bc85,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 90833fcc9599b44aa6218e4f9b67bc85,kubernetes.io/config.seen: 2024-03-14T00:40:18.608651308Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1ef1204dc0777f3991ef37f4302035cb4b26ea673bd3153e5e7724971df600af,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-501107,Uid:97e90dbb56f55f08b36551e3e1ee98f1,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710376908192411367,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e90dbb56f55f08b36551e3e1ee98f1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 97e90dbb56f55f08b36551e3e1ee98f1,kubernetes.io/config.seen: 2024-03-14T00:40:18.608652076Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:40e450041062143a09ac261eb
7b95c2341d27e66e79352b1b4f47d79d927a9f6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-501107,Uid:eecb822b291100c874f48db4490230ec,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710376908126549022,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb822b291100c874f48db4490230ec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.149:8443,kubernetes.io/config.hash: eecb822b291100c874f48db4490230ec,kubernetes.io/config.seen: 2024-03-14T00:40:18.608650233Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fd399570-129d-4185-a72d-359cb3f887a0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.147984296Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=754cc4c7-d98c-4683-9812-9c161e4f1c3f name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.148036089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=754cc4c7-d98c-4683-9812-9c161e4f1c3f name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.148348763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f362b6b9c388537fb9265a8d5254fd52299221402541224efae8d619df85674e,PodSandboxId:39685797b16c79da609747c30f0224e7660204849c5078bc90a4656f0bfb1a5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710376919195751296,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e00189b9412925e7f376b59a9abcd03a04dcced681f35361afdeecb146bad1,PodSandboxId:7b3954bc9895153cb3000c648dfd9dbcc65ddc315332fbbd2243901fce413ec9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710376919191674163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9618edc80e82421dc90e3c7c666830d9315f44196fa10dcabe3a255c65c1d680,PodSandboxId:0e4fa687e4a3c32fd476bfa45b5972f222a3c42988183fa9f324e32bdb3a0ee4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710376914596456774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e90dbb56
f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12760e213c97f2ab352485691401e271890fa2533db56f9d47204d6e42957286,PodSandboxId:ab2de5f80237848af6fbfa3a0247be5985dd7e0b0b0f764002090602e96a8d2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710376914554818214,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
0833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff8ec8d3d3cbd27d9be78021b59724268d26ae108ca3c6e67fb06d3f9e821e7,PodSandboxId:246ac438d9e6a15499557e851bda6274db2faf33f252325263c136ad2245012a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710376914597636600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb822b291100c874f
48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cba6feac71d4bb6d2cda5977defb2c01f884e12b124073de96857ecac9eacf9,PodSandboxId:fffb7517675f5de261500c7902d65af5a2214ab3f432631244e62ec966853d53,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710376914507724701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io
.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564c6d1c26294d9e13c92ef918498427fc6d10ee56d848841c9ad11172aa1ebc,PodSandboxId:a5c729b746c570e4f5cdb36bc03e4f2e377e874d63ae24117b503c79a60e20b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710376909653947338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e
021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adb0ce5b1da890d2eefe281c08c7ad6148d1b18ffc856406dc43b892368e2a7,PodSandboxId:9b9a0114991f5a81699c7687a902ec654e8b4912aad278e3dc25ec2efa24f6b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710376908993566059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db445d273788ce981eb36c95b1fa652d13ea17ae0b2c66f57850e2b21523526,PodSandboxId:a505210a89e6e817da678633ebb48adae43d48cb34815d8913d191576d15eaf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710376908930921217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02725e3ead70526b03287924b07f3153e75d8a2e487c2f74233b3fed554a3f61,PodSandboxId:1ef1204dc0777f3991ef37f4302035cb4b26ea673bd3153e5e7724971df600af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710376908532775426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 97e90dbb56f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9dcbb05205596b1feb1dd58a5099aa834f378b05a5767e04eafbc4a72cae716,PodSandboxId:e30ba0537bb2a7643f08dac541d78346935dc46a06d209cc3d247a7772f3639a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710376908743403425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 90833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a9980abc19fcb8c840dbb9799d3de518ef33585cda7f448409998c7a85dbd5,PodSandboxId:40e450041062143a09ac261eb7b95c2341d27e66e79352b1b4f47d79d927a9f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710376908396991929,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: eecb822b291100c874f48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=754cc4c7-d98c-4683-9812-9c161e4f1c3f name=/runtime.v1.RuntimeService/ListContainers
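The CRI-O journal entries above are the runtime answering ListPodSandbox and ListContainers calls from the kubelet. One way to issue the same CRI requests by hand is crictl against CRI-O's default socket; a small sketch shelling out from Go, offered only as an illustration of the API being exercised:

package main

import (
	"fmt"
	"os/exec"
)

// Issue the two CRI list calls seen in the journal: ListPodSandbox ("pods")
// and ListContainers including exited ones ("ps -a").
func main() {
	for _, args := range [][]string{
		{"--runtime-endpoint", "unix:///var/run/crio/crio.sock", "pods"},
		{"--runtime-endpoint", "unix:///var/run/crio/crio.sock", "ps", "-a"},
	} {
		out, err := exec.Command("crictl", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("crictl %v failed: %v\n", args, err)
			continue
		}
		fmt.Printf("%s\n", out)
	}
}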
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.210187617Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=800b3ec9-6842-4418-b0a6-2b196d21665b name=/runtime.v1.RuntimeService/Version
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.210345929Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=800b3ec9-6842-4418-b0a6-2b196d21665b name=/runtime.v1.RuntimeService/Version
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.211714187Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51852131-4bbc-427f-9ca8-8f57c69a5a64 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.212881168Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710376937212836577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51852131-4bbc-427f-9ca8-8f57c69a5a64 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.213679913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=615eedbe-8639-479b-8ed5-eb07e64db975 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.213788171Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=615eedbe-8639-479b-8ed5-eb07e64db975 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.214231825Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f362b6b9c388537fb9265a8d5254fd52299221402541224efae8d619df85674e,PodSandboxId:39685797b16c79da609747c30f0224e7660204849c5078bc90a4656f0bfb1a5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710376919195751296,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e00189b9412925e7f376b59a9abcd03a04dcced681f35361afdeecb146bad1,PodSandboxId:7b3954bc9895153cb3000c648dfd9dbcc65ddc315332fbbd2243901fce413ec9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710376919191674163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9618edc80e82421dc90e3c7c666830d9315f44196fa10dcabe3a255c65c1d680,PodSandboxId:0e4fa687e4a3c32fd476bfa45b5972f222a3c42988183fa9f324e32bdb3a0ee4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710376914596456774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e90dbb56
f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12760e213c97f2ab352485691401e271890fa2533db56f9d47204d6e42957286,PodSandboxId:ab2de5f80237848af6fbfa3a0247be5985dd7e0b0b0f764002090602e96a8d2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710376914554818214,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
0833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff8ec8d3d3cbd27d9be78021b59724268d26ae108ca3c6e67fb06d3f9e821e7,PodSandboxId:246ac438d9e6a15499557e851bda6274db2faf33f252325263c136ad2245012a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710376914597636600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb822b291100c874f
48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cba6feac71d4bb6d2cda5977defb2c01f884e12b124073de96857ecac9eacf9,PodSandboxId:fffb7517675f5de261500c7902d65af5a2214ab3f432631244e62ec966853d53,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710376914507724701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io
.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564c6d1c26294d9e13c92ef918498427fc6d10ee56d848841c9ad11172aa1ebc,PodSandboxId:a5c729b746c570e4f5cdb36bc03e4f2e377e874d63ae24117b503c79a60e20b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710376909653947338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e
021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adb0ce5b1da890d2eefe281c08c7ad6148d1b18ffc856406dc43b892368e2a7,PodSandboxId:9b9a0114991f5a81699c7687a902ec654e8b4912aad278e3dc25ec2efa24f6b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710376908993566059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db445d273788ce981eb36c95b1fa652d13ea17ae0b2c66f57850e2b21523526,PodSandboxId:a505210a89e6e817da678633ebb48adae43d48cb34815d8913d191576d15eaf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710376908930921217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02725e3ead70526b03287924b07f3153e75d8a2e487c2f74233b3fed554a3f61,PodSandboxId:1ef1204dc0777f3991ef37f4302035cb4b26ea673bd3153e5e7724971df600af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710376908532775426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 97e90dbb56f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9dcbb05205596b1feb1dd58a5099aa834f378b05a5767e04eafbc4a72cae716,PodSandboxId:e30ba0537bb2a7643f08dac541d78346935dc46a06d209cc3d247a7772f3639a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710376908743403425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 90833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a9980abc19fcb8c840dbb9799d3de518ef33585cda7f448409998c7a85dbd5,PodSandboxId:40e450041062143a09ac261eb7b95c2341d27e66e79352b1b4f47d79d927a9f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710376908396991929,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: eecb822b291100c874f48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=615eedbe-8639-479b-8ed5-eb07e64db975 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.267754278Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b26e803b-f822-4512-a3bd-a0d35b19b43a name=/runtime.v1.RuntimeService/Version
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.267991386Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b26e803b-f822-4512-a3bd-a0d35b19b43a name=/runtime.v1.RuntimeService/Version
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.269081077Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b4a8da09-a846-45e4-88cb-b70dfbdfdb1c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.269532436Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710376937269508251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4a8da09-a846-45e4-88cb-b70dfbdfdb1c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.270069384Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13ffe57f-5b0c-4b45-acd4-5638fa8905d6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.270180705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13ffe57f-5b0c-4b45-acd4-5638fa8905d6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.270469129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f362b6b9c388537fb9265a8d5254fd52299221402541224efae8d619df85674e,PodSandboxId:39685797b16c79da609747c30f0224e7660204849c5078bc90a4656f0bfb1a5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710376919195751296,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e00189b9412925e7f376b59a9abcd03a04dcced681f35361afdeecb146bad1,PodSandboxId:7b3954bc9895153cb3000c648dfd9dbcc65ddc315332fbbd2243901fce413ec9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710376919191674163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9618edc80e82421dc90e3c7c666830d9315f44196fa10dcabe3a255c65c1d680,PodSandboxId:0e4fa687e4a3c32fd476bfa45b5972f222a3c42988183fa9f324e32bdb3a0ee4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710376914596456774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e90dbb56
f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12760e213c97f2ab352485691401e271890fa2533db56f9d47204d6e42957286,PodSandboxId:ab2de5f80237848af6fbfa3a0247be5985dd7e0b0b0f764002090602e96a8d2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710376914554818214,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
0833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff8ec8d3d3cbd27d9be78021b59724268d26ae108ca3c6e67fb06d3f9e821e7,PodSandboxId:246ac438d9e6a15499557e851bda6274db2faf33f252325263c136ad2245012a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710376914597636600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb822b291100c874f
48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cba6feac71d4bb6d2cda5977defb2c01f884e12b124073de96857ecac9eacf9,PodSandboxId:fffb7517675f5de261500c7902d65af5a2214ab3f432631244e62ec966853d53,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710376914507724701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io
.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564c6d1c26294d9e13c92ef918498427fc6d10ee56d848841c9ad11172aa1ebc,PodSandboxId:a5c729b746c570e4f5cdb36bc03e4f2e377e874d63ae24117b503c79a60e20b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710376909653947338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e
021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adb0ce5b1da890d2eefe281c08c7ad6148d1b18ffc856406dc43b892368e2a7,PodSandboxId:9b9a0114991f5a81699c7687a902ec654e8b4912aad278e3dc25ec2efa24f6b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710376908993566059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db445d273788ce981eb36c95b1fa652d13ea17ae0b2c66f57850e2b21523526,PodSandboxId:a505210a89e6e817da678633ebb48adae43d48cb34815d8913d191576d15eaf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710376908930921217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02725e3ead70526b03287924b07f3153e75d8a2e487c2f74233b3fed554a3f61,PodSandboxId:1ef1204dc0777f3991ef37f4302035cb4b26ea673bd3153e5e7724971df600af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710376908532775426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 97e90dbb56f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9dcbb05205596b1feb1dd58a5099aa834f378b05a5767e04eafbc4a72cae716,PodSandboxId:e30ba0537bb2a7643f08dac541d78346935dc46a06d209cc3d247a7772f3639a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710376908743403425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 90833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a9980abc19fcb8c840dbb9799d3de518ef33585cda7f448409998c7a85dbd5,PodSandboxId:40e450041062143a09ac261eb7b95c2341d27e66e79352b1b4f47d79d927a9f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710376908396991929,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: eecb822b291100c874f48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13ffe57f-5b0c-4b45-acd4-5638fa8905d6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.323830353Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2d7a26d-a366-4c0d-bb00-b8ea1d79ca20 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.323967583Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2d7a26d-a366-4c0d-bb00-b8ea1d79ca20 name=/runtime.v1.RuntimeService/Version
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.326055861Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1912711c-7a35-4058-ac2e-2a4df8d4476f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.326768217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710376937326732664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1912711c-7a35-4058-ac2e-2a4df8d4476f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.327692979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f6fbd24-a367-4811-943a-396366f5105d name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.327748468Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f6fbd24-a367-4811-943a-396366f5105d name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 00:42:17 pause-501107 crio[2766]: time="2024-03-14 00:42:17.328003667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f362b6b9c388537fb9265a8d5254fd52299221402541224efae8d619df85674e,PodSandboxId:39685797b16c79da609747c30f0224e7660204849c5078bc90a4656f0bfb1a5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710376919195751296,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e00189b9412925e7f376b59a9abcd03a04dcced681f35361afdeecb146bad1,PodSandboxId:7b3954bc9895153cb3000c648dfd9dbcc65ddc315332fbbd2243901fce413ec9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710376919191674163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9618edc80e82421dc90e3c7c666830d9315f44196fa10dcabe3a255c65c1d680,PodSandboxId:0e4fa687e4a3c32fd476bfa45b5972f222a3c42988183fa9f324e32bdb3a0ee4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710376914596456774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e90dbb56
f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12760e213c97f2ab352485691401e271890fa2533db56f9d47204d6e42957286,PodSandboxId:ab2de5f80237848af6fbfa3a0247be5985dd7e0b0b0f764002090602e96a8d2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710376914554818214,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
0833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff8ec8d3d3cbd27d9be78021b59724268d26ae108ca3c6e67fb06d3f9e821e7,PodSandboxId:246ac438d9e6a15499557e851bda6274db2faf33f252325263c136ad2245012a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710376914597636600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb822b291100c874f
48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cba6feac71d4bb6d2cda5977defb2c01f884e12b124073de96857ecac9eacf9,PodSandboxId:fffb7517675f5de261500c7902d65af5a2214ab3f432631244e62ec966853d53,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710376914507724701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io
.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564c6d1c26294d9e13c92ef918498427fc6d10ee56d848841c9ad11172aa1ebc,PodSandboxId:a5c729b746c570e4f5cdb36bc03e4f2e377e874d63ae24117b503c79a60e20b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710376909653947338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wpvxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf69bb2-2b46-4c05-8ddc-85b3853135bc,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7e
021,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adb0ce5b1da890d2eefe281c08c7ad6148d1b18ffc856406dc43b892368e2a7,PodSandboxId:9b9a0114991f5a81699c7687a902ec654e8b4912aad278e3dc25ec2efa24f6b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710376908993566059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-rb9kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a1416-591e-4be6-a96a-907165b4bb81,},Annotations:map[string]string{io.kubernetes.container.hash: dcf8e849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db445d273788ce981eb36c95b1fa652d13ea17ae0b2c66f57850e2b21523526,PodSandboxId:a505210a89e6e817da678633ebb48adae43d48cb34815d8913d191576d15eaf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710376908930921217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-501107,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a9bb41914dbba8a07ea51e5de653db74,},Annotations:map[string]string{io.kubernetes.container.hash: 325b1e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02725e3ead70526b03287924b07f3153e75d8a2e487c2f74233b3fed554a3f61,PodSandboxId:1ef1204dc0777f3991ef37f4302035cb4b26ea673bd3153e5e7724971df600af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710376908532775426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-501107,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 97e90dbb56f55f08b36551e3e1ee98f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9dcbb05205596b1feb1dd58a5099aa834f378b05a5767e04eafbc4a72cae716,PodSandboxId:e30ba0537bb2a7643f08dac541d78346935dc46a06d209cc3d247a7772f3639a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710376908743403425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-501107,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 90833fcc9599b44aa6218e4f9b67bc85,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a9980abc19fcb8c840dbb9799d3de518ef33585cda7f448409998c7a85dbd5,PodSandboxId:40e450041062143a09ac261eb7b95c2341d27e66e79352b1b4f47d79d927a9f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710376908396991929,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-501107,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: eecb822b291100c874f48db4490230ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4f0d140e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f6fbd24-a367-4811-943a-396366f5105d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f362b6b9c3885       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   18 seconds ago      Running             kube-proxy                2                   39685797b16c7       kube-proxy-rb9kh
	f4e00189b9412       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 seconds ago      Running             coredns                   2                   7b3954bc98951       coredns-5dd5756b68-wpvxx
	4ff8ec8d3d3cb       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   22 seconds ago      Running             kube-apiserver            2                   246ac438d9e6a       kube-apiserver-pause-501107
	9618edc80e824       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   22 seconds ago      Running             kube-scheduler            2                   0e4fa687e4a3c       kube-scheduler-pause-501107
	12760e213c97f       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   22 seconds ago      Running             kube-controller-manager   2                   ab2de5f802378       kube-controller-manager-pause-501107
	8cba6feac71d4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   22 seconds ago      Running             etcd                      2                   fffb7517675f5       etcd-pause-501107
	564c6d1c26294       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   27 seconds ago      Exited              coredns                   1                   a5c729b746c57       coredns-5dd5756b68-wpvxx
	4adb0ce5b1da8       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   28 seconds ago      Exited              kube-proxy                1                   9b9a0114991f5       kube-proxy-rb9kh
	2db445d273788       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   28 seconds ago      Exited              etcd                      1                   a505210a89e6e       etcd-pause-501107
	c9dcbb0520559       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   28 seconds ago      Exited              kube-controller-manager   1                   e30ba0537bb2a       kube-controller-manager-pause-501107
	02725e3ead705       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   28 seconds ago      Exited              kube-scheduler            1                   1ef1204dc0777       kube-scheduler-pause-501107
	e5a9980abc19f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   29 seconds ago      Exited              kube-apiserver            1                   40e4500410621       kube-apiserver-pause-501107
	
	
	==> coredns [564c6d1c26294d9e13c92ef918498427fc6d10ee56d848841c9ad11172aa1ebc] <==
	
	
	==> coredns [f4e00189b9412925e7f376b59a9abcd03a04dcced681f35361afdeecb146bad1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35216 - 29426 "HINFO IN 5898812746197343338.95170553043417064. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.011543277s
	
	
	==> describe nodes <==
	Name:               pause-501107
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-501107
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=pause-501107
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T00_40_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:40:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-501107
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:42:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 00:41:58 +0000   Thu, 14 Mar 2024 00:40:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 00:41:58 +0000   Thu, 14 Mar 2024 00:40:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 00:41:58 +0000   Thu, 14 Mar 2024 00:40:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 00:41:58 +0000   Thu, 14 Mar 2024 00:40:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    pause-501107
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	System Info:
	  Machine ID:                 18435ec4adf14ebe95268b7da54269e5
	  System UUID:                18435ec4-adf1-4ebe-9526-8b7da54269e5
	  Boot ID:                    8818f262-fa38-48c0-8dc2-cf35cc50bad3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-wpvxx                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     105s
	  kube-system                 etcd-pause-501107                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         119s
	  kube-system                 kube-apiserver-pause-501107             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-pause-501107    200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-rb9kh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-pause-501107             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 18s                  kube-proxy       
	  Normal  Starting                 2m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node pause-501107 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node pause-501107 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m6s)  kubelet          Node pause-501107 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node pause-501107 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  119s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node pause-501107 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node pause-501107 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeReady                118s                 kubelet          Node pause-501107 status is now: NodeReady
	  Normal  RegisteredNode           106s                 node-controller  Node pause-501107 event: Registered Node pause-501107 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 24s)    kubelet          Node pause-501107 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 24s)    kubelet          Node pause-501107 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 24s)    kubelet          Node pause-501107 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                   node-controller  Node pause-501107 event: Registered Node pause-501107 in Controller
	
	
	==> dmesg <==
	[  +0.058485] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057593] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.181979] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.146500] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.262491] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +5.162961] systemd-fstab-generator[749]: Ignoring "noauto" option for root device
	[  +0.064168] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.400962] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +0.600349] kauditd_printk_skb: 51 callbacks suppressed
	[  +6.663670] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	[  +0.090602] kauditd_printk_skb: 36 callbacks suppressed
	[ +13.894060] systemd-fstab-generator[1498]: Ignoring "noauto" option for root device
	[  +0.062965] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.624783] kauditd_printk_skb: 78 callbacks suppressed
	[Mar14 00:41] systemd-fstab-generator[2373]: Ignoring "noauto" option for root device
	[  +0.332233] systemd-fstab-generator[2480]: Ignoring "noauto" option for root device
	[  +0.387454] systemd-fstab-generator[2609]: Ignoring "noauto" option for root device
	[  +0.210136] systemd-fstab-generator[2634]: Ignoring "noauto" option for root device
	[  +0.462537] systemd-fstab-generator[2729]: Ignoring "noauto" option for root device
	[  +1.573944] systemd-fstab-generator[3241]: Ignoring "noauto" option for root device
	[  +2.141169] systemd-fstab-generator[3366]: Ignoring "noauto" option for root device
	[  +0.081190] kauditd_printk_skb: 236 callbacks suppressed
	[  +5.592395] kauditd_printk_skb: 38 callbacks suppressed
	[Mar14 00:42] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.165336] systemd-fstab-generator[3787]: Ignoring "noauto" option for root device
	
	
	==> etcd [2db445d273788ce981eb36c95b1fa652d13ea17ae0b2c66f57850e2b21523526] <==
	{"level":"info","ts":"2024-03-14T00:41:49.666282Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"50.88387ms"}
	{"level":"info","ts":"2024-03-14T00:41:49.730719Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-03-14T00:41:49.822778Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","commit-index":474}
	{"level":"info","ts":"2024-03-14T00:41:49.822982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d switched to configuration voters=()"}
	{"level":"info","ts":"2024-03-14T00:41:49.823066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became follower at term 2"}
	{"level":"info","ts":"2024-03-14T00:41:49.823111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ba3e3e863cacc4d [peers: [], term: 2, commit: 474, applied: 0, lastindex: 474, lastterm: 2]"}
	{"level":"warn","ts":"2024-03-14T00:41:49.837527Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-03-14T00:41:49.869316Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":450}
	{"level":"info","ts":"2024-03-14T00:41:49.923504Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-03-14T00:41:49.978545Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"ba3e3e863cacc4d","timeout":"7s"}
	{"level":"info","ts":"2024-03-14T00:41:49.978856Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"ba3e3e863cacc4d"}
	{"level":"info","ts":"2024-03-14T00:41:49.978934Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"ba3e3e863cacc4d","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-03-14T00:41:50.001034Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T00:41:50.009314Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-14T00:41:50.009522Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:41:50.009586Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:41:50.009599Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:41:50.009943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d switched to configuration voters=(838764542867197005)"}
	{"level":"info","ts":"2024-03-14T00:41:50.010057Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","added-peer-id":"ba3e3e863cacc4d","added-peer-peer-urls":["https://192.168.39.149:2380"]}
	{"level":"info","ts":"2024-03-14T00:41:50.010284Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:41:50.010344Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:41:50.037401Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-03-14T00:41:50.037464Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-03-14T00:41:50.037734Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ba3e3e863cacc4d","initial-advertise-peer-urls":["https://192.168.39.149:2380"],"listen-peer-urls":["https://192.168.39.149:2380"],"advertise-client-urls":["https://192.168.39.149:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.149:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T00:41:50.037803Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [8cba6feac71d4bb6d2cda5977defb2c01f884e12b124073de96857ecac9eacf9] <==
	{"level":"info","ts":"2024-03-14T00:41:54.916517Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:41:54.916528Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:41:54.916788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d switched to configuration voters=(838764542867197005)"}
	{"level":"info","ts":"2024-03-14T00:41:54.916895Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","added-peer-id":"ba3e3e863cacc4d","added-peer-peer-urls":["https://192.168.39.149:2380"]}
	{"level":"info","ts":"2024-03-14T00:41:54.91704Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:41:54.917066Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:41:54.934228Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-03-14T00:41:54.934326Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-03-14T00:41:54.930021Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T00:41:54.936492Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ba3e3e863cacc4d","initial-advertise-peer-urls":["https://192.168.39.149:2380"],"listen-peer-urls":["https://192.168.39.149:2380"],"advertise-client-urls":["https://192.168.39.149:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.149:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T00:41:54.936555Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-14T00:41:56.5681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-14T00:41:56.568312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-14T00:41:56.568368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d received MsgPreVoteResp from ba3e3e863cacc4d at term 2"}
	{"level":"info","ts":"2024-03-14T00:41:56.568405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became candidate at term 3"}
	{"level":"info","ts":"2024-03-14T00:41:56.568429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d received MsgVoteResp from ba3e3e863cacc4d at term 3"}
	{"level":"info","ts":"2024-03-14T00:41:56.568456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became leader at term 3"}
	{"level":"info","ts":"2024-03-14T00:41:56.568481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ba3e3e863cacc4d elected leader ba3e3e863cacc4d at term 3"}
	{"level":"info","ts":"2024-03-14T00:41:56.570501Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:41:56.572233Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ba3e3e863cacc4d","local-member-attributes":"{Name:pause-501107 ClientURLs:[https://192.168.39.149:2379]}","request-path":"/0/members/ba3e3e863cacc4d/attributes","cluster-id":"65f5490397676253","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T00:41:56.572617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:41:56.572755Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.149:2379"}
	{"level":"info","ts":"2024-03-14T00:41:56.573598Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T00:41:56.573824Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T00:41:56.573868Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:42:17 up 2 min,  0 users,  load average: 0.26, 0.13, 0.05
	Linux pause-501107 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4ff8ec8d3d3cbd27d9be78021b59724268d26ae108ca3c6e67fb06d3f9e821e7] <==
	I0314 00:41:58.075423       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0314 00:41:58.109251       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0314 00:41:58.109287       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0314 00:41:58.183682       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 00:41:58.183816       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 00:41:58.183838       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 00:41:58.192490       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 00:41:58.197513       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 00:41:58.209331       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 00:41:58.209860       1 aggregator.go:166] initial CRD sync complete...
	I0314 00:41:58.209911       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 00:41:58.209919       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 00:41:58.209925       1 cache.go:39] Caches are synced for autoregister controller
	E0314 00:41:58.228894       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0314 00:41:58.244385       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 00:41:58.252662       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 00:41:58.252694       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 00:41:59.050037       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 00:42:00.062300       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 00:42:00.084227       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 00:42:00.147411       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 00:42:00.190415       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 00:42:00.198388       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 00:42:10.559741       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 00:42:10.712009       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [e5a9980abc19fcb8c840dbb9799d3de518ef33585cda7f448409998c7a85dbd5] <==
	I0314 00:41:49.256481       1 options.go:220] external host was not specified, using 192.168.39.149
	I0314 00:41:49.296499       1 server.go:148] Version: v1.28.4
	I0314 00:41:49.296596       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [12760e213c97f2ab352485691401e271890fa2533db56f9d47204d6e42957286] <==
	I0314 00:42:10.565738       1 range_allocator.go:174] "Sending events to api server"
	I0314 00:42:10.565785       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0314 00:42:10.565810       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0314 00:42:10.565833       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0314 00:42:10.568040       1 shared_informer.go:318] Caches are synced for daemon sets
	I0314 00:42:10.568315       1 shared_informer.go:318] Caches are synced for expand
	I0314 00:42:10.573387       1 shared_informer.go:318] Caches are synced for deployment
	I0314 00:42:10.574081       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0314 00:42:10.574341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="152.069µs"
	I0314 00:42:10.576939       1 shared_informer.go:318] Caches are synced for stateful set
	I0314 00:42:10.578168       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0314 00:42:10.579423       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0314 00:42:10.580735       1 shared_informer.go:318] Caches are synced for ephemeral
	I0314 00:42:10.582094       1 shared_informer.go:318] Caches are synced for GC
	I0314 00:42:10.587512       1 shared_informer.go:318] Caches are synced for PV protection
	I0314 00:42:10.589878       1 shared_informer.go:318] Caches are synced for HPA
	I0314 00:42:10.595203       1 shared_informer.go:318] Caches are synced for persistent volume
	I0314 00:42:10.597729       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0314 00:42:10.601024       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0314 00:42:10.649466       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 00:42:10.678722       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 00:42:10.750629       1 shared_informer.go:318] Caches are synced for attach detach
	I0314 00:42:11.106885       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 00:42:11.107029       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 00:42:11.109203       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [c9dcbb05205596b1feb1dd58a5099aa834f378b05a5767e04eafbc4a72cae716] <==
	
	
	==> kube-proxy [4adb0ce5b1da890d2eefe281c08c7ad6148d1b18ffc856406dc43b892368e2a7] <==
	
	
	==> kube-proxy [f362b6b9c388537fb9265a8d5254fd52299221402541224efae8d619df85674e] <==
	I0314 00:41:59.427996       1 server_others.go:69] "Using iptables proxy"
	I0314 00:41:59.438598       1 node.go:141] Successfully retrieved node IP: 192.168.39.149
	I0314 00:41:59.480549       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 00:41:59.480635       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 00:41:59.483093       1 server_others.go:152] "Using iptables Proxier"
	I0314 00:41:59.483268       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 00:41:59.483575       1 server.go:846] "Version info" version="v1.28.4"
	I0314 00:41:59.483602       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:41:59.485249       1 config.go:188] "Starting service config controller"
	I0314 00:41:59.486313       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 00:41:59.486430       1 config.go:97] "Starting endpoint slice config controller"
	I0314 00:41:59.486451       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 00:41:59.487654       1 config.go:315] "Starting node config controller"
	I0314 00:41:59.487693       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 00:41:59.586850       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 00:41:59.586863       1 shared_informer.go:318] Caches are synced for service config
	I0314 00:41:59.588365       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [02725e3ead70526b03287924b07f3153e75d8a2e487c2f74233b3fed554a3f61] <==
	
	
	==> kube-scheduler [9618edc80e82421dc90e3c7c666830d9315f44196fa10dcabe3a255c65c1d680] <==
	I0314 00:41:55.783884       1 serving.go:348] Generated self-signed cert in-memory
	W0314 00:41:58.122680       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 00:41:58.122745       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 00:41:58.122756       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 00:41:58.122763       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 00:41:58.201828       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 00:41:58.201877       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:41:58.208184       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 00:41:58.208241       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 00:41:58.211901       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 00:41:58.212041       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 00:41:58.309262       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 00:41:54 pause-501107 kubelet[3373]: I0314 00:41:54.491715    3373 scope.go:117] "RemoveContainer" containerID="e5a9980abc19fcb8c840dbb9799d3de518ef33585cda7f448409998c7a85dbd5"
	Mar 14 00:41:54 pause-501107 kubelet[3373]: I0314 00:41:54.495036    3373 scope.go:117] "RemoveContainer" containerID="c9dcbb05205596b1feb1dd58a5099aa834f378b05a5767e04eafbc4a72cae716"
	Mar 14 00:41:54 pause-501107 kubelet[3373]: I0314 00:41:54.496242    3373 scope.go:117] "RemoveContainer" containerID="02725e3ead70526b03287924b07f3153e75d8a2e487c2f74233b3fed554a3f61"
	Mar 14 00:41:54 pause-501107 kubelet[3373]: I0314 00:41:54.578817    3373 kubelet_node_status.go:70] "Attempting to register node" node="pause-501107"
	Mar 14 00:41:54 pause-501107 kubelet[3373]: E0314 00:41:54.580694    3373 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.149:8443: connect: connection refused" node="pause-501107"
	Mar 14 00:41:54 pause-501107 kubelet[3373]: W0314 00:41:54.707036    3373 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-501107&limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	Mar 14 00:41:54 pause-501107 kubelet[3373]: E0314 00:41:54.707105    3373 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-501107&limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	Mar 14 00:41:54 pause-501107 kubelet[3373]: W0314 00:41:54.707524    3373 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	Mar 14 00:41:54 pause-501107 kubelet[3373]: E0314 00:41:54.707785    3373 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	Mar 14 00:41:54 pause-501107 kubelet[3373]: W0314 00:41:54.784613    3373 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	Mar 14 00:41:54 pause-501107 kubelet[3373]: E0314 00:41:54.784696    3373 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	Mar 14 00:41:55 pause-501107 kubelet[3373]: I0314 00:41:55.382039    3373 kubelet_node_status.go:70] "Attempting to register node" node="pause-501107"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.224861    3373 kubelet_node_status.go:108] "Node was previously registered" node="pause-501107"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.224998    3373 kubelet_node_status.go:73] "Successfully registered node" node="pause-501107"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.232516    3373 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.234003    3373 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.847451    3373 apiserver.go:52] "Watching apiserver"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.850667    3373 topology_manager.go:215] "Topology Admit Handler" podUID="590a1416-591e-4be6-a96a-907165b4bb81" podNamespace="kube-system" podName="kube-proxy-rb9kh"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.850893    3373 topology_manager.go:215] "Topology Admit Handler" podUID="fbf69bb2-2b46-4c05-8ddc-85b3853135bc" podNamespace="kube-system" podName="coredns-5dd5756b68-wpvxx"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.865544    3373 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.951095    3373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/590a1416-591e-4be6-a96a-907165b4bb81-xtables-lock\") pod \"kube-proxy-rb9kh\" (UID: \"590a1416-591e-4be6-a96a-907165b4bb81\") " pod="kube-system/kube-proxy-rb9kh"
	Mar 14 00:41:58 pause-501107 kubelet[3373]: I0314 00:41:58.951362    3373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/590a1416-591e-4be6-a96a-907165b4bb81-lib-modules\") pod \"kube-proxy-rb9kh\" (UID: \"590a1416-591e-4be6-a96a-907165b4bb81\") " pod="kube-system/kube-proxy-rb9kh"
	Mar 14 00:41:59 pause-501107 kubelet[3373]: I0314 00:41:59.151968    3373 scope.go:117] "RemoveContainer" containerID="564c6d1c26294d9e13c92ef918498427fc6d10ee56d848841c9ad11172aa1ebc"
	Mar 14 00:41:59 pause-501107 kubelet[3373]: I0314 00:41:59.152581    3373 scope.go:117] "RemoveContainer" containerID="4adb0ce5b1da890d2eefe281c08c7ad6148d1b18ffc856406dc43b892368e2a7"
	Mar 14 00:42:06 pause-501107 kubelet[3373]: I0314 00:42:06.160532    3373 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-501107 -n pause-501107
helpers_test.go:261: (dbg) Run:  kubectl --context pause-501107 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (64.71s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (306.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-004791 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-004791 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m6.340990506s)

                                                
                                                
-- stdout --
	* [old-k8s-version-004791] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-004791" primary control-plane node in "old-k8s-version-004791" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 00:47:12.590290   58903 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:47:12.590574   58903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:47:12.590585   58903 out.go:304] Setting ErrFile to fd 2...
	I0314 00:47:12.590589   58903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:47:12.590858   58903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:47:12.591556   58903 out.go:298] Setting JSON to false
	I0314 00:47:12.592661   58903 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5376,"bootTime":1710371857,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:47:12.592728   58903 start.go:139] virtualization: kvm guest
	I0314 00:47:12.595115   58903 out.go:177] * [old-k8s-version-004791] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:47:12.596918   58903 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:47:12.598313   58903 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:47:12.596914   58903 notify.go:220] Checking for updates...
	I0314 00:47:12.600008   58903 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:47:12.601420   58903 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:47:12.602879   58903 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:47:12.604408   58903 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:47:12.606142   58903 config.go:182] Loaded profile config "bridge-326260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:47:12.606260   58903 config.go:182] Loaded profile config "enable-default-cni-326260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:47:12.606351   58903 config.go:182] Loaded profile config "flannel-326260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:47:12.606467   58903 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:47:12.644404   58903 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 00:47:12.645672   58903 start.go:297] selected driver: kvm2
	I0314 00:47:12.645684   58903 start.go:901] validating driver "kvm2" against <nil>
	I0314 00:47:12.645698   58903 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:47:12.646407   58903 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:47:12.646489   58903 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:47:12.662581   58903 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:47:12.662631   58903 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 00:47:12.662918   58903 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 00:47:12.662950   58903 cni.go:84] Creating CNI manager for ""
	I0314 00:47:12.662958   58903 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:47:12.662967   58903 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 00:47:12.663013   58903 start.go:340] cluster config:
	{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:47:12.663109   58903 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:47:12.665052   58903 out.go:177] * Starting "old-k8s-version-004791" primary control-plane node in "old-k8s-version-004791" cluster
	I0314 00:47:12.666713   58903 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:47:12.666756   58903 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0314 00:47:12.666783   58903 cache.go:56] Caching tarball of preloaded images
	I0314 00:47:12.666884   58903 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 00:47:12.666896   58903 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0314 00:47:12.667000   58903 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:47:12.667027   58903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json: {Name:mk572591233ba21ac399c256edaa21a1b118ecb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:47:12.667158   58903 start.go:360] acquireMachinesLock for old-k8s-version-004791: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:47:44.328231   58903 start.go:364] duration metric: took 31.661035787s to acquireMachinesLock for "old-k8s-version-004791"
	I0314 00:47:44.328289   58903 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:47:44.328448   58903 start.go:125] createHost starting for "" (driver="kvm2")
	I0314 00:47:44.331940   58903 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 00:47:44.332163   58903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:47:44.332213   58903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:47:44.350107   58903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
	I0314 00:47:44.350614   58903 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:47:44.351185   58903 main.go:141] libmachine: Using API Version  1
	I0314 00:47:44.351210   58903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:47:44.351542   58903 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:47:44.351721   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:47:44.351869   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:47:44.352078   58903 start.go:159] libmachine.API.Create for "old-k8s-version-004791" (driver="kvm2")
	I0314 00:47:44.352109   58903 client.go:168] LocalClient.Create starting
	I0314 00:47:44.352145   58903 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem
	I0314 00:47:44.352184   58903 main.go:141] libmachine: Decoding PEM data...
	I0314 00:47:44.352206   58903 main.go:141] libmachine: Parsing certificate...
	I0314 00:47:44.352277   58903 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem
	I0314 00:47:44.352300   58903 main.go:141] libmachine: Decoding PEM data...
	I0314 00:47:44.352319   58903 main.go:141] libmachine: Parsing certificate...
	I0314 00:47:44.352342   58903 main.go:141] libmachine: Running pre-create checks...
	I0314 00:47:44.352355   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .PreCreateCheck
	I0314 00:47:44.352709   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetConfigRaw
	I0314 00:47:44.353162   58903 main.go:141] libmachine: Creating machine...
	I0314 00:47:44.353179   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .Create
	I0314 00:47:44.353310   58903 main.go:141] libmachine: (old-k8s-version-004791) Creating KVM machine...
	I0314 00:47:44.354794   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found existing default KVM network
	I0314 00:47:44.356187   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:44.355997   59210 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c1:af:ff} reservation:<nil>}
	I0314 00:47:44.357093   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:44.357000   59210 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9a:7b:3c} reservation:<nil>}
	I0314 00:47:44.358079   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:44.357967   59210 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:77:b5:68} reservation:<nil>}
	I0314 00:47:44.359143   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:44.359042   59210 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003672f0}
	I0314 00:47:44.359216   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | created network xml: 
	I0314 00:47:44.359237   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | <network>
	I0314 00:47:44.359265   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG |   <name>mk-old-k8s-version-004791</name>
	I0314 00:47:44.359302   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG |   <dns enable='no'/>
	I0314 00:47:44.359312   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG |   
	I0314 00:47:44.359321   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0314 00:47:44.359330   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG |     <dhcp>
	I0314 00:47:44.359340   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0314 00:47:44.359348   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG |     </dhcp>
	I0314 00:47:44.359367   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG |   </ip>
	I0314 00:47:44.359375   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG |   
	I0314 00:47:44.359388   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | </network>
	I0314 00:47:44.359398   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | 
	I0314 00:47:44.364710   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | trying to create private KVM network mk-old-k8s-version-004791 192.168.72.0/24...
	I0314 00:47:44.440210   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | private KVM network mk-old-k8s-version-004791 192.168.72.0/24 created
	I0314 00:47:44.440258   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:44.440192   59210 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:47:44.440279   58903 main.go:141] libmachine: (old-k8s-version-004791) Setting up store path in /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791 ...
	I0314 00:47:44.440293   58903 main.go:141] libmachine: (old-k8s-version-004791) Building disk image from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 00:47:44.440477   58903 main.go:141] libmachine: (old-k8s-version-004791) Downloading /home/jenkins/minikube-integration/18375-4912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 00:47:44.683610   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:44.683474   59210 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa...
	I0314 00:47:44.977913   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:44.977771   59210 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/old-k8s-version-004791.rawdisk...
	I0314 00:47:44.977952   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | Writing magic tar header
	I0314 00:47:44.977983   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | Writing SSH key tar header
	I0314 00:47:44.977997   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:44.977919   59210 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791 ...
	I0314 00:47:44.978087   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791
	I0314 00:47:44.978118   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines
	I0314 00:47:44.978132   58903 main.go:141] libmachine: (old-k8s-version-004791) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791 (perms=drwx------)
	I0314 00:47:44.978148   58903 main.go:141] libmachine: (old-k8s-version-004791) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines (perms=drwxr-xr-x)
	I0314 00:47:44.978164   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:47:44.978176   58903 main.go:141] libmachine: (old-k8s-version-004791) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube (perms=drwxr-xr-x)
	I0314 00:47:44.978504   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912
	I0314 00:47:44.978556   58903 main.go:141] libmachine: (old-k8s-version-004791) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912 (perms=drwxrwxr-x)
	I0314 00:47:44.978569   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 00:47:44.978843   58903 main.go:141] libmachine: (old-k8s-version-004791) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 00:47:44.978867   58903 main.go:141] libmachine: (old-k8s-version-004791) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 00:47:44.978895   58903 main.go:141] libmachine: (old-k8s-version-004791) Creating domain...
	I0314 00:47:44.978936   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | Checking permissions on dir: /home/jenkins
	I0314 00:47:44.978961   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | Checking permissions on dir: /home
	I0314 00:47:44.978985   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | Skipping /home - not owner
	I0314 00:47:44.980356   58903 main.go:141] libmachine: (old-k8s-version-004791) define libvirt domain using xml: 
	I0314 00:47:44.980382   58903 main.go:141] libmachine: (old-k8s-version-004791) <domain type='kvm'>
	I0314 00:47:44.980399   58903 main.go:141] libmachine: (old-k8s-version-004791)   <name>old-k8s-version-004791</name>
	I0314 00:47:44.980408   58903 main.go:141] libmachine: (old-k8s-version-004791)   <memory unit='MiB'>2200</memory>
	I0314 00:47:44.980419   58903 main.go:141] libmachine: (old-k8s-version-004791)   <vcpu>2</vcpu>
	I0314 00:47:44.980435   58903 main.go:141] libmachine: (old-k8s-version-004791)   <features>
	I0314 00:47:44.980502   58903 main.go:141] libmachine: (old-k8s-version-004791)     <acpi/>
	I0314 00:47:44.980522   58903 main.go:141] libmachine: (old-k8s-version-004791)     <apic/>
	I0314 00:47:44.980541   58903 main.go:141] libmachine: (old-k8s-version-004791)     <pae/>
	I0314 00:47:44.980555   58903 main.go:141] libmachine: (old-k8s-version-004791)     
	I0314 00:47:44.980572   58903 main.go:141] libmachine: (old-k8s-version-004791)   </features>
	I0314 00:47:44.980584   58903 main.go:141] libmachine: (old-k8s-version-004791)   <cpu mode='host-passthrough'>
	I0314 00:47:44.980596   58903 main.go:141] libmachine: (old-k8s-version-004791)   
	I0314 00:47:44.980611   58903 main.go:141] libmachine: (old-k8s-version-004791)   </cpu>
	I0314 00:47:44.980706   58903 main.go:141] libmachine: (old-k8s-version-004791)   <os>
	I0314 00:47:44.980725   58903 main.go:141] libmachine: (old-k8s-version-004791)     <type>hvm</type>
	I0314 00:47:44.980744   58903 main.go:141] libmachine: (old-k8s-version-004791)     <boot dev='cdrom'/>
	I0314 00:47:44.980753   58903 main.go:141] libmachine: (old-k8s-version-004791)     <boot dev='hd'/>
	I0314 00:47:44.980773   58903 main.go:141] libmachine: (old-k8s-version-004791)     <bootmenu enable='no'/>
	I0314 00:47:44.980782   58903 main.go:141] libmachine: (old-k8s-version-004791)   </os>
	I0314 00:47:44.980791   58903 main.go:141] libmachine: (old-k8s-version-004791)   <devices>
	I0314 00:47:44.980805   58903 main.go:141] libmachine: (old-k8s-version-004791)     <disk type='file' device='cdrom'>
	I0314 00:47:44.980819   58903 main.go:141] libmachine: (old-k8s-version-004791)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/boot2docker.iso'/>
	I0314 00:47:44.980828   58903 main.go:141] libmachine: (old-k8s-version-004791)       <target dev='hdc' bus='scsi'/>
	I0314 00:47:44.980843   58903 main.go:141] libmachine: (old-k8s-version-004791)       <readonly/>
	I0314 00:47:44.980851   58903 main.go:141] libmachine: (old-k8s-version-004791)     </disk>
	I0314 00:47:44.980866   58903 main.go:141] libmachine: (old-k8s-version-004791)     <disk type='file' device='disk'>
	I0314 00:47:44.980877   58903 main.go:141] libmachine: (old-k8s-version-004791)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 00:47:44.980907   58903 main.go:141] libmachine: (old-k8s-version-004791)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/old-k8s-version-004791.rawdisk'/>
	I0314 00:47:44.980934   58903 main.go:141] libmachine: (old-k8s-version-004791)       <target dev='hda' bus='virtio'/>
	I0314 00:47:44.980943   58903 main.go:141] libmachine: (old-k8s-version-004791)     </disk>
	I0314 00:47:44.980988   58903 main.go:141] libmachine: (old-k8s-version-004791)     <interface type='network'>
	I0314 00:47:44.981002   58903 main.go:141] libmachine: (old-k8s-version-004791)       <source network='mk-old-k8s-version-004791'/>
	I0314 00:47:44.981021   58903 main.go:141] libmachine: (old-k8s-version-004791)       <model type='virtio'/>
	I0314 00:47:44.981366   58903 main.go:141] libmachine: (old-k8s-version-004791)     </interface>
	I0314 00:47:44.981401   58903 main.go:141] libmachine: (old-k8s-version-004791)     <interface type='network'>
	I0314 00:47:44.981414   58903 main.go:141] libmachine: (old-k8s-version-004791)       <source network='default'/>
	I0314 00:47:44.981426   58903 main.go:141] libmachine: (old-k8s-version-004791)       <model type='virtio'/>
	I0314 00:47:44.981436   58903 main.go:141] libmachine: (old-k8s-version-004791)     </interface>
	I0314 00:47:44.981446   58903 main.go:141] libmachine: (old-k8s-version-004791)     <serial type='pty'>
	I0314 00:47:44.981455   58903 main.go:141] libmachine: (old-k8s-version-004791)       <target port='0'/>
	I0314 00:47:44.981464   58903 main.go:141] libmachine: (old-k8s-version-004791)     </serial>
	I0314 00:47:44.981473   58903 main.go:141] libmachine: (old-k8s-version-004791)     <console type='pty'>
	I0314 00:47:44.981484   58903 main.go:141] libmachine: (old-k8s-version-004791)       <target type='serial' port='0'/>
	I0314 00:47:44.981492   58903 main.go:141] libmachine: (old-k8s-version-004791)     </console>
	I0314 00:47:44.981507   58903 main.go:141] libmachine: (old-k8s-version-004791)     <rng model='virtio'>
	I0314 00:47:44.981518   58903 main.go:141] libmachine: (old-k8s-version-004791)       <backend model='random'>/dev/random</backend>
	I0314 00:47:44.981529   58903 main.go:141] libmachine: (old-k8s-version-004791)     </rng>
	I0314 00:47:44.981543   58903 main.go:141] libmachine: (old-k8s-version-004791)     
	I0314 00:47:44.981553   58903 main.go:141] libmachine: (old-k8s-version-004791)     
	I0314 00:47:44.981562   58903 main.go:141] libmachine: (old-k8s-version-004791)   </devices>
	I0314 00:47:44.981571   58903 main.go:141] libmachine: (old-k8s-version-004791) </domain>
	I0314 00:47:44.981581   58903 main.go:141] libmachine: (old-k8s-version-004791) 
	I0314 00:47:44.988064   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:20:23:3d in network default
	I0314 00:47:44.988865   58903 main.go:141] libmachine: (old-k8s-version-004791) Ensuring networks are active...
	I0314 00:47:44.988892   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:47:44.989789   58903 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network default is active
	I0314 00:47:44.990177   58903 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network mk-old-k8s-version-004791 is active
	I0314 00:47:44.990900   58903 main.go:141] libmachine: (old-k8s-version-004791) Getting domain xml...
	I0314 00:47:44.991700   58903 main.go:141] libmachine: (old-k8s-version-004791) Creating domain...
	I0314 00:47:46.504788   58903 main.go:141] libmachine: (old-k8s-version-004791) Waiting to get IP...
	I0314 00:47:46.506029   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:47:46.506465   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:47:46.506498   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:46.506430   59210 retry.go:31] will retry after 217.535758ms: waiting for machine to come up
	I0314 00:47:46.725983   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:47:46.726749   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:47:46.726794   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:46.726633   59210 retry.go:31] will retry after 257.690975ms: waiting for machine to come up
	I0314 00:47:46.986272   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:47:46.986831   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:47:46.986855   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:46.986791   59210 retry.go:31] will retry after 396.303627ms: waiting for machine to come up
	I0314 00:47:47.384481   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:47:47.385212   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:47:47.385245   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:47.385176   59210 retry.go:31] will retry after 407.130219ms: waiting for machine to come up
	I0314 00:47:47.793590   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:47:47.794311   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:47:47.794329   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:47.794230   59210 retry.go:31] will retry after 758.888762ms: waiting for machine to come up
	I0314 00:47:48.555415   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:47:48.556192   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:47:48.556223   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:48.556108   59210 retry.go:31] will retry after 661.188214ms: waiting for machine to come up
	I0314 00:47:49.219186   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:47:49.219667   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:47:49.219690   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:49.219591   59210 retry.go:31] will retry after 1.016208807s: waiting for machine to come up
	I0314 00:47:50.237154   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:47:50.237872   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:47:50.237899   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:50.237782   59210 retry.go:31] will retry after 1.323603195s: waiting for machine to come up
	I0314 00:47:51.563511   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:47:51.564079   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:47:51.564111   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:51.564018   59210 retry.go:31] will retry after 1.788277837s: waiting for machine to come up
	I0314 00:47:53.354176   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:47:53.354747   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:47:53.354797   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:53.354672   59210 retry.go:31] will retry after 1.953524329s: waiting for machine to come up
	I0314 00:47:55.309469   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:47:55.309974   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:47:55.310012   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:55.309919   59210 retry.go:31] will retry after 2.194777573s: waiting for machine to come up
	I0314 00:47:57.506572   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:47:57.507320   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:47:57.507339   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:47:57.507255   59210 retry.go:31] will retry after 3.257244537s: waiting for machine to come up
	I0314 00:48:00.766213   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:00.766688   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:48:00.766716   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:48:00.766643   59210 retry.go:31] will retry after 3.681362872s: waiting for machine to come up
	I0314 00:48:04.449887   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:04.450419   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:48:04.450456   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:48:04.450368   59210 retry.go:31] will retry after 4.974259216s: waiting for machine to come up
	I0314 00:48:09.425522   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:09.426043   58903 main.go:141] libmachine: (old-k8s-version-004791) Found IP for machine: 192.168.72.11
	I0314 00:48:09.426064   58903 main.go:141] libmachine: (old-k8s-version-004791) Reserving static IP address...
	I0314 00:48:09.426080   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has current primary IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:09.426531   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"} in network mk-old-k8s-version-004791
	I0314 00:48:09.512607   58903 main.go:141] libmachine: (old-k8s-version-004791) Reserved static IP address: 192.168.72.11
	I0314 00:48:09.512643   58903 main.go:141] libmachine: (old-k8s-version-004791) Waiting for SSH to be available...
	I0314 00:48:09.512667   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | Getting to WaitForSSH function...
	I0314 00:48:09.515922   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:09.516364   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:09.516400   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:09.516573   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH client type: external
	I0314 00:48:09.516618   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa (-rw-------)
	I0314 00:48:09.516654   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:48:09.516670   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | About to run SSH command:
	I0314 00:48:09.516684   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | exit 0
	I0314 00:48:09.647401   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | SSH cmd err, output: <nil>: 
	I0314 00:48:09.647628   58903 main.go:141] libmachine: (old-k8s-version-004791) KVM machine creation complete!
	I0314 00:48:09.648012   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetConfigRaw
	I0314 00:48:09.648562   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:48:09.648828   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:48:09.648994   58903 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0314 00:48:09.649022   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetState
	I0314 00:48:09.650579   58903 main.go:141] libmachine: Detecting operating system of created instance...
	I0314 00:48:09.650597   58903 main.go:141] libmachine: Waiting for SSH to be available...
	I0314 00:48:09.650604   58903 main.go:141] libmachine: Getting to WaitForSSH function...
	I0314 00:48:09.650612   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:48:09.653078   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:09.653540   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:09.653569   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:09.653744   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:48:09.653926   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:09.654140   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:09.654329   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:48:09.654532   58903 main.go:141] libmachine: Using SSH client type: native
	I0314 00:48:09.654759   58903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:48:09.654794   58903 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0314 00:48:09.758581   58903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:48:09.758605   58903 main.go:141] libmachine: Detecting the provisioner...
	I0314 00:48:09.758619   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:48:09.761895   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:09.762290   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:09.762320   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:09.762491   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:48:09.762712   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:09.762903   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:09.763044   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:48:09.763244   58903 main.go:141] libmachine: Using SSH client type: native
	I0314 00:48:09.763471   58903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:48:09.763488   58903 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0314 00:48:09.868195   58903 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0314 00:48:09.868284   58903 main.go:141] libmachine: found compatible host: buildroot
	I0314 00:48:09.868298   58903 main.go:141] libmachine: Provisioning with buildroot...
	I0314 00:48:09.868306   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:48:09.868505   58903 buildroot.go:166] provisioning hostname "old-k8s-version-004791"
	I0314 00:48:09.868527   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:48:09.868721   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:48:09.872193   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:09.872730   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:09.872764   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:09.872920   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:48:09.873113   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:09.873330   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:09.873499   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:48:09.873692   58903 main.go:141] libmachine: Using SSH client type: native
	I0314 00:48:09.873933   58903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:48:09.873954   58903 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-004791 && echo "old-k8s-version-004791" | sudo tee /etc/hostname
	I0314 00:48:09.997817   58903 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-004791
	
	I0314 00:48:09.997849   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:48:10.000765   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.001081   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:10.001117   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.001521   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:48:10.001691   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:10.001867   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:10.002014   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:48:10.002382   58903 main.go:141] libmachine: Using SSH client type: native
	I0314 00:48:10.002577   58903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:48:10.002599   58903 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-004791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-004791/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-004791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:48:10.122408   58903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:48:10.122445   58903 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:48:10.122484   58903 buildroot.go:174] setting up certificates
	I0314 00:48:10.122500   58903 provision.go:84] configureAuth start
	I0314 00:48:10.122516   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:48:10.122813   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:48:10.126020   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.126492   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:10.126527   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.126814   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:48:10.129560   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.129927   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:10.129963   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.130196   58903 provision.go:143] copyHostCerts
	I0314 00:48:10.130279   58903 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:48:10.130302   58903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:48:10.130377   58903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:48:10.130504   58903 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:48:10.130517   58903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:48:10.130549   58903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:48:10.130634   58903 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:48:10.130645   58903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:48:10.130672   58903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:48:10.130759   58903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-004791 san=[127.0.0.1 192.168.72.11 localhost minikube old-k8s-version-004791]
	I0314 00:48:10.195076   58903 provision.go:177] copyRemoteCerts
	I0314 00:48:10.195147   58903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:48:10.195181   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:48:10.198495   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.198841   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:10.198876   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.199067   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:48:10.199311   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:10.199512   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:48:10.199689   58903 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:48:10.288509   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:48:10.322353   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 00:48:10.351254   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 00:48:10.381172   58903 provision.go:87] duration metric: took 258.654733ms to configureAuth
	I0314 00:48:10.381200   58903 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:48:10.381373   58903 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:48:10.381476   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:48:10.385873   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.386332   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:10.386363   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.386533   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:48:10.386783   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:10.386930   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:10.387124   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:48:10.387298   58903 main.go:141] libmachine: Using SSH client type: native
	I0314 00:48:10.387519   58903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:48:10.387542   58903 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:48:10.708376   58903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:48:10.708403   58903 main.go:141] libmachine: Checking connection to Docker...
	I0314 00:48:10.708415   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetURL
	I0314 00:48:10.709735   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using libvirt version 6000000
	I0314 00:48:10.712343   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.712749   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:10.712783   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.713171   58903 main.go:141] libmachine: Docker is up and running!
	I0314 00:48:10.713185   58903 main.go:141] libmachine: Reticulating splines...
	I0314 00:48:10.713193   58903 client.go:171] duration metric: took 26.361073507s to LocalClient.Create
	I0314 00:48:10.713213   58903 start.go:167] duration metric: took 26.361134039s to libmachine.API.Create "old-k8s-version-004791"
	I0314 00:48:10.713223   58903 start.go:293] postStartSetup for "old-k8s-version-004791" (driver="kvm2")
	I0314 00:48:10.713235   58903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:48:10.713260   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:48:10.713543   58903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:48:10.713565   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:48:10.716577   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.717063   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:10.717087   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.717290   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:48:10.717468   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:10.717730   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:48:10.717833   58903 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:48:10.807249   58903 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:48:10.812237   58903 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:48:10.812264   58903 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:48:10.812321   58903 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:48:10.812408   58903 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:48:10.812514   58903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:48:10.823610   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:48:10.856142   58903 start.go:296] duration metric: took 142.905682ms for postStartSetup
	I0314 00:48:10.856211   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetConfigRaw
	I0314 00:48:10.856876   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:48:10.860434   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.860909   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:10.860972   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.861251   58903 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:48:10.861495   58903 start.go:128] duration metric: took 26.533034368s to createHost
	I0314 00:48:10.861526   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:48:10.864163   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.864511   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:10.864537   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.864723   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:48:10.864943   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:10.865121   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:10.865328   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:48:10.865515   58903 main.go:141] libmachine: Using SSH client type: native
	I0314 00:48:10.865726   58903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:48:10.865745   58903 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 00:48:10.977118   58903 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377290.956347540
	
	I0314 00:48:10.977140   58903 fix.go:216] guest clock: 1710377290.956347540
	I0314 00:48:10.977149   58903 fix.go:229] Guest: 2024-03-14 00:48:10.95634754 +0000 UTC Remote: 2024-03-14 00:48:10.86151171 +0000 UTC m=+58.320332881 (delta=94.83583ms)
	I0314 00:48:10.977181   58903 fix.go:200] guest clock delta is within tolerance: 94.83583ms
	I0314 00:48:10.977189   58903 start.go:83] releasing machines lock for "old-k8s-version-004791", held for 26.648936313s
	I0314 00:48:10.977221   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:48:10.977525   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:48:10.981086   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.981440   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:10.981479   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.981778   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:48:10.982368   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:48:10.982972   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:48:10.983051   58903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:48:10.983096   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:48:10.983169   58903 ssh_runner.go:195] Run: cat /version.json
	I0314 00:48:10.983187   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:48:10.987125   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.987156   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.987193   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:10.987238   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.987417   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:48:10.987571   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:10.987688   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:10.987850   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:10.987858   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:48:10.988089   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:48:10.988125   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:48:10.988248   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:48:10.988252   58903 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:48:10.988378   58903 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:48:11.108074   58903 ssh_runner.go:195] Run: systemctl --version
	I0314 00:48:11.114916   58903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:48:11.280522   58903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:48:11.288908   58903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:48:11.288982   58903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:48:11.308520   58903 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:48:11.308546   58903 start.go:494] detecting cgroup driver to use...
	I0314 00:48:11.308617   58903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:48:11.328170   58903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:48:11.348058   58903 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:48:11.348119   58903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:48:11.366369   58903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:48:11.385473   58903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:48:11.556364   58903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:48:11.743768   58903 docker.go:233] disabling docker service ...
	I0314 00:48:11.743843   58903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:48:11.763919   58903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:48:11.779658   58903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:48:11.937831   58903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:48:12.094051   58903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:48:12.112168   58903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:48:12.133318   58903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 00:48:12.133394   58903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:48:12.147127   58903 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:48:12.147199   58903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:48:12.159002   58903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:48:12.171046   58903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:48:12.185212   58903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:48:12.203061   58903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:48:12.215812   58903 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:48:12.215875   58903 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:48:12.232646   58903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:48:12.247253   58903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:48:12.388337   58903 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:48:12.568622   58903 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:48:12.568692   58903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:48:12.573724   58903 start.go:562] Will wait 60s for crictl version
	I0314 00:48:12.573784   58903 ssh_runner.go:195] Run: which crictl
	I0314 00:48:12.577920   58903 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:48:12.624906   58903 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:48:12.625015   58903 ssh_runner.go:195] Run: crio --version
	I0314 00:48:12.660358   58903 ssh_runner.go:195] Run: crio --version
	I0314 00:48:12.695552   58903 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 00:48:12.697144   58903 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:48:12.701107   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:12.701702   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:48:12.701733   58903 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:48:12.702166   58903 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 00:48:12.707544   58903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:48:12.725133   58903 kubeadm.go:877] updating cluster {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:48:12.725282   58903 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:48:12.725329   58903 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:48:12.767618   58903 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:48:12.767693   58903 ssh_runner.go:195] Run: which lz4
	I0314 00:48:12.772189   58903 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 00:48:12.777046   58903 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:48:12.777081   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 00:48:14.810165   58903 crio.go:444] duration metric: took 2.038023402s to copy over tarball
	I0314 00:48:14.810232   58903 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:48:17.880940   58903 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.070670486s)
	I0314 00:48:17.880970   58903 crio.go:451] duration metric: took 3.070782902s to extract the tarball
	I0314 00:48:17.880992   58903 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:48:17.927293   58903 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:48:17.986913   58903 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:48:17.986936   58903 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:48:17.987004   58903 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:48:17.987224   58903 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 00:48:17.987242   58903 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:48:17.987337   58903 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:48:17.987444   58903 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:48:17.987455   58903 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 00:48:17.987629   58903 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:48:17.987761   58903 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:48:17.988714   58903 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:48:17.989221   58903 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 00:48:17.989304   58903 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:48:17.989317   58903 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:48:17.989449   58903 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 00:48:17.989519   58903 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:48:17.989569   58903 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:48:17.989578   58903 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:48:18.166240   58903 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:48:18.168969   58903 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:48:18.192717   58903 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 00:48:18.214054   58903 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:48:18.216042   58903 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 00:48:18.233296   58903 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:48:18.266471   58903 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 00:48:18.280305   58903 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 00:48:18.280342   58903 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:48:18.280392   58903 ssh_runner.go:195] Run: which crictl
	I0314 00:48:18.280439   58903 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 00:48:18.280482   58903 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:48:18.280552   58903 ssh_runner.go:195] Run: which crictl
	I0314 00:48:18.384842   58903 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 00:48:18.384889   58903 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 00:48:18.384940   58903 ssh_runner.go:195] Run: which crictl
	I0314 00:48:18.392150   58903 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 00:48:18.392191   58903 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 00:48:18.392200   58903 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:48:18.392222   58903 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:48:18.392247   58903 ssh_runner.go:195] Run: which crictl
	I0314 00:48:18.392262   58903 ssh_runner.go:195] Run: which crictl
	I0314 00:48:18.419402   58903 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 00:48:18.419453   58903 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 00:48:18.419500   58903 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 00:48:18.419532   58903 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:48:18.419555   58903 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:48:18.419572   58903 ssh_runner.go:195] Run: which crictl
	I0314 00:48:18.419507   58903 ssh_runner.go:195] Run: which crictl
	I0314 00:48:18.419658   58903 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:48:18.419707   58903 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 00:48:18.419768   58903 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:48:18.419768   58903 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 00:48:18.545587   58903 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 00:48:18.545700   58903 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:48:18.545779   58903 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 00:48:18.547791   58903 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 00:48:18.547854   58903 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 00:48:18.547893   58903 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 00:48:18.547918   58903 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 00:48:18.601479   58903 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 00:48:18.601637   58903 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 00:48:18.931691   58903 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:48:19.092364   58903 cache_images.go:92] duration metric: took 1.105412983s to LoadCachedImages
	W0314 00:48:19.092452   58903 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0314 00:48:19.092469   58903 kubeadm.go:928] updating node { 192.168.72.11 8443 v1.20.0 crio true true} ...
	I0314 00:48:19.092628   58903 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-004791 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:48:19.092752   58903 ssh_runner.go:195] Run: crio config
	I0314 00:48:19.166895   58903 cni.go:84] Creating CNI manager for ""
	I0314 00:48:19.166926   58903 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:48:19.166939   58903 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:48:19.166956   58903 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-004791 NodeName:old-k8s-version-004791 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 00:48:19.167136   58903 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-004791"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:48:19.167208   58903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 00:48:19.179389   58903 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:48:19.179447   58903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:48:19.192583   58903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0314 00:48:19.213996   58903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:48:19.232800   58903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0314 00:48:19.264213   58903 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I0314 00:48:19.269809   58903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:48:19.283677   58903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:48:19.402694   58903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:48:19.425562   58903 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791 for IP: 192.168.72.11
	I0314 00:48:19.425583   58903 certs.go:194] generating shared ca certs ...
	I0314 00:48:19.425603   58903 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:48:19.425791   58903 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:48:19.425863   58903 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:48:19.425879   58903 certs.go:256] generating profile certs ...
	I0314 00:48:19.425946   58903 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/client.key
	I0314 00:48:19.425965   58903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/client.crt with IP's: []
	I0314 00:48:19.630228   58903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/client.crt ...
	I0314 00:48:19.630260   58903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/client.crt: {Name:mk52acb8c963edd3d31176873e87722ef60e54b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:48:19.630429   58903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/client.key ...
	I0314 00:48:19.630444   58903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/client.key: {Name:mk9a17f9098d2f76eaeb2530534086a8ecd29047 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:48:19.630521   58903 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key.c57f8e0c
	I0314 00:48:19.630533   58903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.crt.c57f8e0c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.11]
	I0314 00:48:19.982015   58903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.crt.c57f8e0c ...
	I0314 00:48:19.982044   58903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.crt.c57f8e0c: {Name:mka3897309a247fd9661b0ef01c849d4cb206294 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:48:19.982230   58903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key.c57f8e0c ...
	I0314 00:48:19.982250   58903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key.c57f8e0c: {Name:mk454b1e87b55f0d58d5df445c6f3c4f3599efe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:48:19.982345   58903 certs.go:381] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.crt.c57f8e0c -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.crt
	I0314 00:48:19.982442   58903 certs.go:385] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key.c57f8e0c -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key
	I0314 00:48:19.982525   58903 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key
	I0314 00:48:19.982547   58903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.crt with IP's: []
	I0314 00:48:20.448530   58903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.crt ...
	I0314 00:48:20.448572   58903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.crt: {Name:mk18d03296cd964c63e4c20935807f20b581916d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:48:20.448744   58903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key ...
	I0314 00:48:20.448759   58903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key: {Name:mk109d6c011225ba90bb9a34e615c5339a9ab2b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:48:20.448963   58903 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:48:20.449001   58903 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:48:20.449009   58903 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:48:20.449033   58903 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:48:20.449056   58903 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:48:20.449078   58903 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:48:20.449114   58903 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:48:20.449782   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:48:20.498969   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:48:20.533781   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:48:20.565271   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:48:20.595448   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 00:48:20.626569   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:48:20.660387   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:48:20.698165   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:48:20.729299   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:48:20.757829   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:48:20.791793   58903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:48:20.829383   58903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:48:20.860777   58903 ssh_runner.go:195] Run: openssl version
	I0314 00:48:20.875439   58903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:48:20.898981   58903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:48:20.910504   58903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:48:20.910591   58903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:48:20.920225   58903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:48:20.937426   58903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:48:20.953790   58903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:48:20.961248   58903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:48:20.961299   58903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:48:20.968002   58903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:48:20.987489   58903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:48:21.005278   58903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:48:21.012198   58903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:48:21.012267   58903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:48:21.021036   58903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:48:21.038179   58903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:48:21.044611   58903 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 00:48:21.044674   58903 kubeadm.go:391] StartCluster: {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:48:21.044767   58903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:48:21.044820   58903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:48:21.093693   58903 cri.go:89] found id: ""
	I0314 00:48:21.093786   58903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 00:48:21.109167   58903 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:48:21.124702   58903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:48:21.139438   58903 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:48:21.139464   58903 kubeadm.go:156] found existing configuration files:
	
	I0314 00:48:21.139521   58903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:48:21.153943   58903 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:48:21.154006   58903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:48:21.170024   58903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:48:21.181625   58903 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:48:21.181682   58903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:48:21.204107   58903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:48:21.224490   58903 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:48:21.224566   58903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:48:21.236483   58903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:48:21.247814   58903 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:48:21.247871   58903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
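The stale-config check above applies the same pattern to each kubeconfig: grep the file for the expected control-plane endpoint and, if the URL is absent (or the file does not exist at all, as here), remove it so kubeadm can regenerate it. A rough shell equivalent of that loop, assuming the same four files and endpoint:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected control-plane URL
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done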
	I0314 00:48:21.260760   58903 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 00:48:21.670922   58903 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 00:50:20.156835   58903 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 00:50:20.156938   58903 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 00:50:20.159217   58903 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 00:50:20.159417   58903 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 00:50:20.159616   58903 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 00:50:20.159806   58903 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 00:50:20.159965   58903 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 00:50:20.160118   58903 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 00:50:20.162234   58903 out.go:204]   - Generating certificates and keys ...
	I0314 00:50:20.162328   58903 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 00:50:20.162410   58903 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 00:50:20.162508   58903 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 00:50:20.162585   58903 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 00:50:20.162680   58903 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 00:50:20.162755   58903 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 00:50:20.162840   58903 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 00:50:20.163031   58903 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-004791] and IPs [192.168.72.11 127.0.0.1 ::1]
	I0314 00:50:20.163098   58903 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 00:50:20.163207   58903 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-004791] and IPs [192.168.72.11 127.0.0.1 ::1]
	I0314 00:50:20.163288   58903 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 00:50:20.163363   58903 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 00:50:20.163421   58903 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 00:50:20.163497   58903 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 00:50:20.163568   58903 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 00:50:20.163636   58903 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 00:50:20.163693   58903 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 00:50:20.163742   58903 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 00:50:20.163827   58903 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 00:50:20.163911   58903 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 00:50:20.163946   58903 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 00:50:20.164044   58903 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 00:50:20.165776   58903 out.go:204]   - Booting up control plane ...
	I0314 00:50:20.165881   58903 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 00:50:20.165968   58903 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 00:50:20.166056   58903 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 00:50:20.166158   58903 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 00:50:20.166352   58903 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 00:50:20.166442   58903 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 00:50:20.166541   58903 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:50:20.166741   58903 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:50:20.166844   58903 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:50:20.167077   58903 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:50:20.167170   58903 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:50:20.167423   58903 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:50:20.167540   58903 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:50:20.167768   58903 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:50:20.167849   58903 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:50:20.168054   58903 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:50:20.168070   58903 kubeadm.go:309] 
	I0314 00:50:20.168117   58903 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 00:50:20.168166   58903 kubeadm.go:309] 		timed out waiting for the condition
	I0314 00:50:20.168177   58903 kubeadm.go:309] 
	I0314 00:50:20.168217   58903 kubeadm.go:309] 	This error is likely caused by:
	I0314 00:50:20.168260   58903 kubeadm.go:309] 		- The kubelet is not running
	I0314 00:50:20.168367   58903 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 00:50:20.168384   58903 kubeadm.go:309] 
	I0314 00:50:20.168495   58903 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 00:50:20.168537   58903 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 00:50:20.168575   58903 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 00:50:20.168584   58903 kubeadm.go:309] 
	I0314 00:50:20.168723   58903 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 00:50:20.168821   58903 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 00:50:20.168833   58903 kubeadm.go:309] 
	I0314 00:50:20.168931   58903 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 00:50:20.169040   58903 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 00:50:20.169138   58903 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 00:50:20.169204   58903 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	W0314 00:50:20.169315   58903 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-004791] and IPs [192.168.72.11 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-004791] and IPs [192.168.72.11 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-004791] and IPs [192.168.72.11 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-004791] and IPs [192.168.72.11 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0314 00:50:20.169357   58903 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 00:50:20.169575   58903 kubeadm.go:309] 
	I0314 00:50:21.808068   58903 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.638682517s)
	I0314 00:50:21.808159   58903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:50:21.824449   58903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:50:21.836213   58903 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:50:21.836240   58903 kubeadm.go:156] found existing configuration files:
	
	I0314 00:50:21.836290   58903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:50:21.847447   58903 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:50:21.847512   58903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:50:21.858431   58903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:50:21.868947   58903 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:50:21.869015   58903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:50:21.879926   58903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:50:21.890379   58903 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:50:21.890446   58903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:50:21.901038   58903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:50:21.911200   58903 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:50:21.911270   58903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:50:21.921758   58903 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 00:50:22.008632   58903 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 00:50:22.008684   58903 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 00:50:22.172439   58903 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 00:50:22.172615   58903 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 00:50:22.172780   58903 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 00:50:22.401834   58903 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 00:50:22.404132   58903 out.go:204]   - Generating certificates and keys ...
	I0314 00:50:22.404235   58903 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 00:50:22.404310   58903 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 00:50:22.404440   58903 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 00:50:22.404561   58903 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 00:50:22.404673   58903 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 00:50:22.404768   58903 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 00:50:22.404859   58903 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 00:50:22.404971   58903 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 00:50:22.405078   58903 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 00:50:22.405218   58903 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 00:50:22.405281   58903 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 00:50:22.405355   58903 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 00:50:22.545948   58903 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 00:50:22.768404   58903 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 00:50:22.894564   58903 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 00:50:23.038348   58903 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 00:50:23.057107   58903 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 00:50:23.058810   58903 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 00:50:23.058875   58903 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 00:50:23.230940   58903 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 00:50:23.233768   58903 out.go:204]   - Booting up control plane ...
	I0314 00:50:23.233899   58903 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 00:50:23.241326   58903 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 00:50:23.244469   58903 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 00:50:23.251435   58903 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 00:50:23.258729   58903 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 00:51:03.260863   58903 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 00:51:03.261051   58903 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:51:03.261283   58903 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:51:08.261988   58903 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:51:08.262269   58903 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:51:18.263044   58903 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:51:18.263302   58903 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:51:38.264334   58903 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:51:38.264509   58903 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:52:18.264808   58903 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 00:52:18.265051   58903 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 00:52:18.265077   58903 kubeadm.go:309] 
	I0314 00:52:18.265126   58903 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 00:52:18.265201   58903 kubeadm.go:309] 		timed out waiting for the condition
	I0314 00:52:18.265227   58903 kubeadm.go:309] 
	I0314 00:52:18.265276   58903 kubeadm.go:309] 	This error is likely caused by:
	I0314 00:52:18.265322   58903 kubeadm.go:309] 		- The kubelet is not running
	I0314 00:52:18.265462   58903 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 00:52:18.265480   58903 kubeadm.go:309] 
	I0314 00:52:18.265642   58903 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 00:52:18.265689   58903 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 00:52:18.265745   58903 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 00:52:18.265765   58903 kubeadm.go:309] 
	I0314 00:52:18.265860   58903 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 00:52:18.265929   58903 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 00:52:18.265935   58903 kubeadm.go:309] 
	I0314 00:52:18.266025   58903 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 00:52:18.266111   58903 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 00:52:18.266178   58903 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 00:52:18.266241   58903 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 00:52:18.266250   58903 kubeadm.go:309] 
	I0314 00:52:18.267038   58903 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 00:52:18.267126   58903 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 00:52:18.267222   58903 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 00:52:18.267293   58903 kubeadm.go:393] duration metric: took 3m57.222624384s to StartCluster
	I0314 00:52:18.267334   58903 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:52:18.267390   58903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:52:18.315845   58903 cri.go:89] found id: ""
	I0314 00:52:18.315869   58903 logs.go:276] 0 containers: []
	W0314 00:52:18.315881   58903 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:52:18.315889   58903 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:52:18.315951   58903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:52:18.353777   58903 cri.go:89] found id: ""
	I0314 00:52:18.353814   58903 logs.go:276] 0 containers: []
	W0314 00:52:18.353825   58903 logs.go:278] No container was found matching "etcd"
	I0314 00:52:18.353835   58903 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:52:18.353899   58903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:52:18.392662   58903 cri.go:89] found id: ""
	I0314 00:52:18.392690   58903 logs.go:276] 0 containers: []
	W0314 00:52:18.392698   58903 logs.go:278] No container was found matching "coredns"
	I0314 00:52:18.392703   58903 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:52:18.392776   58903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:52:18.430051   58903 cri.go:89] found id: ""
	I0314 00:52:18.430081   58903 logs.go:276] 0 containers: []
	W0314 00:52:18.430090   58903 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:52:18.430095   58903 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:52:18.430149   58903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:52:18.465923   58903 cri.go:89] found id: ""
	I0314 00:52:18.465955   58903 logs.go:276] 0 containers: []
	W0314 00:52:18.465966   58903 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:52:18.465974   58903 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:52:18.466039   58903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:52:18.512867   58903 cri.go:89] found id: ""
	I0314 00:52:18.512899   58903 logs.go:276] 0 containers: []
	W0314 00:52:18.512909   58903 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:52:18.512916   58903 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:52:18.512978   58903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:52:18.550744   58903 cri.go:89] found id: ""
	I0314 00:52:18.550777   58903 logs.go:276] 0 containers: []
	W0314 00:52:18.550785   58903 logs.go:278] No container was found matching "kindnet"
	I0314 00:52:18.550794   58903 logs.go:123] Gathering logs for kubelet ...
	I0314 00:52:18.550805   58903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:52:18.603130   58903 logs.go:123] Gathering logs for dmesg ...
	I0314 00:52:18.603163   58903 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:52:18.617956   58903 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:52:18.617981   58903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:52:18.725493   58903 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:52:18.725519   58903 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:52:18.725536   58903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:52:18.823044   58903 logs.go:123] Gathering logs for container status ...
	I0314 00:52:18.823082   58903 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0314 00:52:18.866256   58903 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 00:52:18.866302   58903 out.go:239] * 
	* 
	W0314 00:52:18.866366   58903 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 00:52:18.866393   58903 out.go:239] * 
	* 
	W0314 00:52:18.867229   58903 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 00:52:18.870192   58903 out.go:177] 
	W0314 00:52:18.871503   58903 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 00:52:18.871568   58903 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 00:52:18.871620   58903 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 00:52:18.873104   58903 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-004791 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791: exit status 6 (233.665244ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:52:19.147746   65294 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-004791" does not appear in /home/jenkins/minikube-integration/18375-4912/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-004791" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (306.63s)
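The failure above is kubeadm giving up in the wait-control-plane phase on the v1.20.0 (old-k8s-version) profile: every probe of http://localhost:10248/healthz is refused, so the kubelet never came up behind cri-o. minikube's own suggestion at the end of the log is to read the kubelet journal and to retry with an explicit cgroup driver. A minimal triage sketch along those lines (not part of the recorded run), reusing the profile name and flags from the failed command:

	# Retry the start with the cgroup driver the error message suggests
	out/minikube-linux-amd64 start -p old-k8s-version-004791 --driver=kvm2 --container-runtime=crio \
		--kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	# Inspect why the kubelet never answered /healthz, as the kubeadm output recommends
	out/minikube-linux-amd64 ssh -p old-k8s-version-004791 -- 'sudo journalctl -xeu kubelet | tail -n 50'
	out/minikube-linux-amd64 ssh -p old-k8s-version-004791 -- \
		'sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'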

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-164135 --alsologtostderr -v=3
E0314 00:50:16.344429   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 00:50:16.484797   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:50:36.825061   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-164135 --alsologtostderr -v=3: exit status 82 (2m0.546317342s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-164135"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 00:50:12.939417   64681 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:50:12.939563   64681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:50:12.939573   64681 out.go:304] Setting ErrFile to fd 2...
	I0314 00:50:12.939578   64681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:50:12.939743   64681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:50:12.939995   64681 out.go:298] Setting JSON to false
	I0314 00:50:12.940068   64681 mustload.go:65] Loading cluster: embed-certs-164135
	I0314 00:50:12.940375   64681 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:50:12.940431   64681 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/config.json ...
	I0314 00:50:12.940656   64681 mustload.go:65] Loading cluster: embed-certs-164135
	I0314 00:50:12.940797   64681 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:50:12.940833   64681 stop.go:39] StopHost: embed-certs-164135
	I0314 00:50:12.941194   64681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:50:12.941230   64681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:50:12.956451   64681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44809
	I0314 00:50:12.956977   64681 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:50:12.957523   64681 main.go:141] libmachine: Using API Version  1
	I0314 00:50:12.957539   64681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:50:12.957864   64681 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:50:12.960482   64681 out.go:177] * Stopping node "embed-certs-164135"  ...
	I0314 00:50:12.961791   64681 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0314 00:50:12.961840   64681 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:50:12.962086   64681 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0314 00:50:12.962139   64681 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:50:12.965682   64681 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:50:12.966135   64681 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:49:07 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:50:12.966172   64681 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:50:12.966452   64681 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:50:12.966656   64681 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:50:12.966832   64681 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:50:12.966977   64681 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:50:13.074832   64681 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0314 00:50:13.130554   64681 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0314 00:50:13.208852   64681 main.go:141] libmachine: Stopping "embed-certs-164135"...
	I0314 00:50:13.208902   64681 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:50:13.210638   64681 main.go:141] libmachine: (embed-certs-164135) Calling .Stop
	I0314 00:50:13.213972   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 0/120
	I0314 00:50:14.215387   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 1/120
	I0314 00:50:15.217352   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 2/120
	I0314 00:50:16.218808   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 3/120
	I0314 00:50:17.220607   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 4/120
	I0314 00:50:18.222748   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 5/120
	I0314 00:50:19.225345   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 6/120
	I0314 00:50:20.227078   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 7/120
	I0314 00:50:21.229456   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 8/120
	I0314 00:50:22.231376   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 9/120
	I0314 00:50:23.233556   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 10/120
	I0314 00:50:24.235752   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 11/120
	I0314 00:50:25.237460   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 12/120
	I0314 00:50:26.239150   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 13/120
	I0314 00:50:27.240875   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 14/120
	I0314 00:50:28.242912   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 15/120
	I0314 00:50:29.245298   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 16/120
	I0314 00:50:30.246588   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 17/120
	I0314 00:50:31.248254   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 18/120
	I0314 00:50:32.249797   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 19/120
	I0314 00:50:33.252356   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 20/120
	I0314 00:50:34.253768   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 21/120
	I0314 00:50:35.255151   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 22/120
	I0314 00:50:36.257689   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 23/120
	I0314 00:50:37.259420   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 24/120
	I0314 00:50:38.260723   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 25/120
	I0314 00:50:39.262436   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 26/120
	I0314 00:50:40.264301   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 27/120
	I0314 00:50:41.265735   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 28/120
	I0314 00:50:42.267804   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 29/120
	I0314 00:50:43.269844   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 30/120
	I0314 00:50:44.271474   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 31/120
	I0314 00:50:45.273644   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 32/120
	I0314 00:50:46.275188   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 33/120
	I0314 00:50:47.277228   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 34/120
	I0314 00:50:48.279427   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 35/120
	I0314 00:50:49.281216   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 36/120
	I0314 00:50:50.282908   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 37/120
	I0314 00:50:51.284346   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 38/120
	I0314 00:50:52.285796   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 39/120
	I0314 00:50:53.288016   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 40/120
	I0314 00:50:54.289674   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 41/120
	I0314 00:50:55.291086   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 42/120
	I0314 00:50:56.293313   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 43/120
	I0314 00:50:57.294880   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 44/120
	I0314 00:50:58.296844   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 45/120
	I0314 00:50:59.298073   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 46/120
	I0314 00:51:00.299456   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 47/120
	I0314 00:51:01.300773   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 48/120
	I0314 00:51:02.302347   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 49/120
	I0314 00:51:03.304959   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 50/120
	I0314 00:51:04.306332   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 51/120
	I0314 00:51:05.307633   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 52/120
	I0314 00:51:06.308913   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 53/120
	I0314 00:51:07.310141   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 54/120
	I0314 00:51:08.311802   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 55/120
	I0314 00:51:09.313602   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 56/120
	I0314 00:51:10.314866   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 57/120
	I0314 00:51:11.316045   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 58/120
	I0314 00:51:12.317277   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 59/120
	I0314 00:51:13.319653   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 60/120
	I0314 00:51:14.320857   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 61/120
	I0314 00:51:15.322269   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 62/120
	I0314 00:51:16.323514   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 63/120
	I0314 00:51:17.325942   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 64/120
	I0314 00:51:18.328606   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 65/120
	I0314 00:51:19.329988   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 66/120
	I0314 00:51:20.331305   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 67/120
	I0314 00:51:21.332860   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 68/120
	I0314 00:51:22.334300   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 69/120
	I0314 00:51:23.336598   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 70/120
	I0314 00:51:24.338247   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 71/120
	I0314 00:51:25.339648   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 72/120
	I0314 00:51:26.340983   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 73/120
	I0314 00:51:27.342347   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 74/120
	I0314 00:51:28.344205   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 75/120
	I0314 00:51:29.345504   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 76/120
	I0314 00:51:30.346968   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 77/120
	I0314 00:51:31.348373   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 78/120
	I0314 00:51:32.349820   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 79/120
	I0314 00:51:33.352163   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 80/120
	I0314 00:51:34.353747   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 81/120
	I0314 00:51:35.355193   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 82/120
	I0314 00:51:36.356927   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 83/120
	I0314 00:51:37.358455   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 84/120
	I0314 00:51:38.360376   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 85/120
	I0314 00:51:39.361833   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 86/120
	I0314 00:51:40.363566   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 87/120
	I0314 00:51:41.365363   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 88/120
	I0314 00:51:42.367009   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 89/120
	I0314 00:51:43.368867   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 90/120
	I0314 00:51:44.370582   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 91/120
	I0314 00:51:45.371863   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 92/120
	I0314 00:51:46.373337   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 93/120
	I0314 00:51:47.374654   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 94/120
	I0314 00:51:48.376510   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 95/120
	I0314 00:51:49.378082   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 96/120
	I0314 00:51:50.379426   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 97/120
	I0314 00:51:51.380927   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 98/120
	I0314 00:51:52.382304   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 99/120
	I0314 00:51:53.384506   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 100/120
	I0314 00:51:54.386102   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 101/120
	I0314 00:51:55.387490   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 102/120
	I0314 00:51:56.389245   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 103/120
	I0314 00:51:57.390824   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 104/120
	I0314 00:51:58.393034   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 105/120
	I0314 00:51:59.394437   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 106/120
	I0314 00:52:00.395976   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 107/120
	I0314 00:52:01.397440   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 108/120
	I0314 00:52:02.399230   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 109/120
	I0314 00:52:03.401457   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 110/120
	I0314 00:52:04.403126   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 111/120
	I0314 00:52:05.404693   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 112/120
	I0314 00:52:06.406158   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 113/120
	I0314 00:52:07.407663   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 114/120
	I0314 00:52:08.409736   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 115/120
	I0314 00:52:09.411256   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 116/120
	I0314 00:52:10.412745   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 117/120
	I0314 00:52:11.414677   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 118/120
	I0314 00:52:12.416139   64681 main.go:141] libmachine: (embed-certs-164135) Waiting for machine to stop 119/120
	I0314 00:52:13.417543   64681 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0314 00:52:13.417612   64681 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0314 00:52:13.419571   64681 out.go:177] 
	W0314 00:52:13.421122   64681 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0314 00:52:13.421143   64681 out.go:239] * 
	* 
	W0314 00:52:13.423676   64681 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 00:52:13.425146   64681 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-164135 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-164135 -n embed-certs-164135
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-164135 -n embed-certs-164135: exit status 3 (18.469019285s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:52:31.895094   65251 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.72:22: connect: no route to host
	E0314 00:52:31.895112   65251 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.72:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-164135" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.02s)
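This stop failure is the GUEST_STOP_TIMEOUT path: libmachine asks the kvm2 driver to stop the VM, then polls its state 120 times over two minutes without it ever leaving "Running", and the follow-up status probe can no longer reach 192.168.50.72:22 at all ("no route to host"). A hedged triage sketch on the libvirt host (not part of the recorded run), assuming the KVM domain is named after the profile, as the mk-embed-certs-164135 network naming in the driver log suggests:

	# See what libvirt itself reports for the guest
	sudo virsh list --all
	sudo virsh dominfo embed-certs-164135
	# Collect minikube's diagnostics, as the failure box requests
	out/minikube-linux-amd64 -p embed-certs-164135 logs --file=logs.txt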

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-585806 --alsologtostderr -v=3
E0314 00:50:57.445903   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-585806 --alsologtostderr -v=3: exit status 82 (2m0.511734994s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-585806"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 00:50:57.186159   64965 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:50:57.186360   64965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:50:57.186369   64965 out.go:304] Setting ErrFile to fd 2...
	I0314 00:50:57.186373   64965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:50:57.186582   64965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:50:57.186891   64965 out.go:298] Setting JSON to false
	I0314 00:50:57.186967   64965 mustload.go:65] Loading cluster: no-preload-585806
	I0314 00:50:57.187266   64965 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:50:57.187326   64965 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/config.json ...
	I0314 00:50:57.187492   64965 mustload.go:65] Loading cluster: no-preload-585806
	I0314 00:50:57.187587   64965 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:50:57.187618   64965 stop.go:39] StopHost: no-preload-585806
	I0314 00:50:57.188063   64965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:50:57.188115   64965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:50:57.204987   64965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44389
	I0314 00:50:57.205512   64965 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:50:57.206112   64965 main.go:141] libmachine: Using API Version  1
	I0314 00:50:57.206136   64965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:50:57.206478   64965 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:50:57.208888   64965 out.go:177] * Stopping node "no-preload-585806"  ...
	I0314 00:50:57.210148   64965 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0314 00:50:57.210186   64965 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:50:57.210417   64965 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0314 00:50:57.210449   64965 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:50:57.213465   64965 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:50:57.213928   64965 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:50:57.213973   64965 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:50:57.214097   64965 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:50:57.214276   64965 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:50:57.214446   64965 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:50:57.214602   64965 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:50:57.311531   64965 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0314 00:50:57.370291   64965 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0314 00:50:57.427997   64965 main.go:141] libmachine: Stopping "no-preload-585806"...
	I0314 00:50:57.428026   64965 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:50:57.429656   64965 main.go:141] libmachine: (no-preload-585806) Calling .Stop
	I0314 00:50:57.433583   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 0/120
	I0314 00:50:58.435617   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 1/120
	I0314 00:50:59.437286   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 2/120
	I0314 00:51:00.438919   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 3/120
	I0314 00:51:01.440499   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 4/120
	I0314 00:51:02.442923   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 5/120
	I0314 00:51:03.444457   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 6/120
	I0314 00:51:04.446177   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 7/120
	I0314 00:51:05.447606   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 8/120
	I0314 00:51:06.449936   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 9/120
	I0314 00:51:07.452285   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 10/120
	I0314 00:51:08.454016   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 11/120
	I0314 00:51:09.455239   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 12/120
	I0314 00:51:10.456512   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 13/120
	I0314 00:51:11.457878   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 14/120
	I0314 00:51:12.460115   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 15/120
	I0314 00:51:13.461509   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 16/120
	I0314 00:51:14.462897   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 17/120
	I0314 00:51:15.464390   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 18/120
	I0314 00:51:16.465709   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 19/120
	I0314 00:51:17.468091   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 20/120
	I0314 00:51:18.469519   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 21/120
	I0314 00:51:19.471077   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 22/120
	I0314 00:51:20.472589   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 23/120
	I0314 00:51:21.474514   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 24/120
	I0314 00:51:22.476724   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 25/120
	I0314 00:51:23.478095   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 26/120
	I0314 00:51:24.479381   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 27/120
	I0314 00:51:25.480662   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 28/120
	I0314 00:51:26.482130   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 29/120
	I0314 00:51:27.484384   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 30/120
	I0314 00:51:28.485625   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 31/120
	I0314 00:51:29.486850   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 32/120
	I0314 00:51:30.488130   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 33/120
	I0314 00:51:31.489541   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 34/120
	I0314 00:51:32.491928   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 35/120
	I0314 00:51:33.493592   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 36/120
	I0314 00:51:34.495324   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 37/120
	I0314 00:51:35.496875   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 38/120
	I0314 00:51:36.498384   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 39/120
	I0314 00:51:37.499885   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 40/120
	I0314 00:51:38.501432   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 41/120
	I0314 00:51:39.503052   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 42/120
	I0314 00:51:40.504462   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 43/120
	I0314 00:51:41.506145   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 44/120
	I0314 00:51:42.508529   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 45/120
	I0314 00:51:43.510485   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 46/120
	I0314 00:51:44.512286   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 47/120
	I0314 00:51:45.514409   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 48/120
	I0314 00:51:46.516035   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 49/120
	I0314 00:51:47.518547   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 50/120
	I0314 00:51:48.520373   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 51/120
	I0314 00:51:49.521904   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 52/120
	I0314 00:51:50.523275   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 53/120
	I0314 00:51:51.524596   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 54/120
	I0314 00:51:52.526556   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 55/120
	I0314 00:51:53.527863   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 56/120
	I0314 00:51:54.529296   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 57/120
	I0314 00:51:55.530944   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 58/120
	I0314 00:51:56.533251   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 59/120
	I0314 00:51:57.535649   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 60/120
	I0314 00:51:58.537141   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 61/120
	I0314 00:51:59.538528   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 62/120
	I0314 00:52:00.539853   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 63/120
	I0314 00:52:01.541477   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 64/120
	I0314 00:52:02.543971   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 65/120
	I0314 00:52:03.545388   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 66/120
	I0314 00:52:04.546852   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 67/120
	I0314 00:52:05.548992   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 68/120
	I0314 00:52:06.550383   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 69/120
	I0314 00:52:07.552887   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 70/120
	I0314 00:52:08.554364   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 71/120
	I0314 00:52:09.555898   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 72/120
	I0314 00:52:10.557477   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 73/120
	I0314 00:52:11.559039   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 74/120
	I0314 00:52:12.560900   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 75/120
	I0314 00:52:13.562270   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 76/120
	I0314 00:52:14.563846   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 77/120
	I0314 00:52:15.565188   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 78/120
	I0314 00:52:16.566667   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 79/120
	I0314 00:52:17.568978   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 80/120
	I0314 00:52:18.570573   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 81/120
	I0314 00:52:19.571736   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 82/120
	I0314 00:52:20.573137   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 83/120
	I0314 00:52:21.574803   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 84/120
	I0314 00:52:22.576814   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 85/120
	I0314 00:52:23.578219   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 86/120
	I0314 00:52:24.579631   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 87/120
	I0314 00:52:25.581516   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 88/120
	I0314 00:52:26.582942   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 89/120
	I0314 00:52:27.585388   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 90/120
	I0314 00:52:28.586758   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 91/120
	I0314 00:52:29.588171   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 92/120
	I0314 00:52:30.589977   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 93/120
	I0314 00:52:31.591457   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 94/120
	I0314 00:52:32.593491   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 95/120
	I0314 00:52:33.594998   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 96/120
	I0314 00:52:34.596614   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 97/120
	I0314 00:52:35.598156   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 98/120
	I0314 00:52:36.599742   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 99/120
	I0314 00:52:37.602168   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 100/120
	I0314 00:52:38.603505   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 101/120
	I0314 00:52:39.604987   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 102/120
	I0314 00:52:40.606440   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 103/120
	I0314 00:52:41.607918   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 104/120
	I0314 00:52:42.610100   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 105/120
	I0314 00:52:43.611580   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 106/120
	I0314 00:52:44.612994   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 107/120
	I0314 00:52:45.614586   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 108/120
	I0314 00:52:46.616119   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 109/120
	I0314 00:52:47.618567   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 110/120
	I0314 00:52:48.619761   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 111/120
	I0314 00:52:49.621684   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 112/120
	I0314 00:52:50.623084   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 113/120
	I0314 00:52:51.624810   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 114/120
	I0314 00:52:52.627007   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 115/120
	I0314 00:52:53.628264   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 116/120
	I0314 00:52:54.629682   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 117/120
	I0314 00:52:55.631213   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 118/120
	I0314 00:52:56.632701   64965 main.go:141] libmachine: (no-preload-585806) Waiting for machine to stop 119/120
	I0314 00:52:57.633422   64965 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0314 00:52:57.633504   64965 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0314 00:52:57.635364   64965 out.go:177] 
	W0314 00:52:57.636720   64965 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0314 00:52:57.636736   64965 out.go:239] * 
	W0314 00:52:57.639302   64965 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 00:52:57.640655   64965 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-585806 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-585806 -n no-preload-585806
E0314 00:53:05.165919   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:53:07.176878   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:53:15.675874   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:53:15.681117   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:53:15.691385   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:53:15.711653   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:53:15.752056   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:53:15.832416   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:53:15.993015   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-585806 -n no-preload-585806: exit status 3 (18.540875046s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:53:16.183072   65638 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.115:22: connect: no route to host
	E0314 00:53:16.183095   65638 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.115:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-585806" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.05s)
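The failure above follows one pattern: the kvm2 driver issues the stop, polls libvirt once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120"), then exits with code 82 (GUEST_STOP_TIMEOUT) while the guest still reports "Running"; the follow-up status probe can no longer reach port 22 on 192.168.39.115. The commands below are a hedged, out-of-band check to run on the Jenkins host, not part of the test run; they assume the libvirt domain carries the profile name, which is how the driver refers to domains elsewhere in this log, and that the operator has libvirt access.

    # What does libvirt itself think the domain is doing?
    virsh list --all | grep no-preload-585806
    virsh dominfo no-preload-585806

    # Ask the guest to power off, then force it off if the ACPI request is ignored.
    virsh shutdown no-preload-585806
    sleep 30
    virsh destroy no-preload-585806    # hard power-off, equivalent to pulling the plug

    # Collect minikube's own diagnostics, as the error box above recommends.
    out/minikube-linux-amd64 -p no-preload-585806 logs --file=logs.txt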

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-652215 --alsologtostderr -v=3
E0314 00:51:21.986183   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 00:51:32.226906   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 00:51:43.241011   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:51:43.246303   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:51:43.256569   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:51:43.276923   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:51:43.317195   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:51:43.398205   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:51:43.558838   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:51:43.879498   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:51:44.520634   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:51:45.801291   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:51:48.362268   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:51:52.708086   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 00:51:53.483146   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:52:03.723870   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-652215 --alsologtostderr -v=3: exit status 82 (2m0.552613402s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-652215"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 00:51:18.614529   65100 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:51:18.614813   65100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:51:18.614823   65100 out.go:304] Setting ErrFile to fd 2...
	I0314 00:51:18.614827   65100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:51:18.615013   65100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:51:18.615251   65100 out.go:298] Setting JSON to false
	I0314 00:51:18.615323   65100 mustload.go:65] Loading cluster: default-k8s-diff-port-652215
	I0314 00:51:18.615628   65100 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:51:18.615688   65100 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/config.json ...
	I0314 00:51:18.615847   65100 mustload.go:65] Loading cluster: default-k8s-diff-port-652215
	I0314 00:51:18.615943   65100 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:51:18.615974   65100 stop.go:39] StopHost: default-k8s-diff-port-652215
	I0314 00:51:18.616356   65100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:51:18.616392   65100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:51:18.632147   65100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I0314 00:51:18.632665   65100 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:51:18.633294   65100 main.go:141] libmachine: Using API Version  1
	I0314 00:51:18.633318   65100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:51:18.633745   65100 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:51:18.636386   65100 out.go:177] * Stopping node "default-k8s-diff-port-652215"  ...
	I0314 00:51:18.637878   65100 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0314 00:51:18.637900   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:51:18.638193   65100 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0314 00:51:18.638218   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:51:18.641122   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:51:18.641561   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:49:44 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:51:18.641592   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:51:18.641784   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:51:18.641976   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:51:18.642132   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:51:18.642272   65100 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:51:18.776994   65100 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0314 00:51:18.844639   65100 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0314 00:51:18.900117   65100 main.go:141] libmachine: Stopping "default-k8s-diff-port-652215"...
	I0314 00:51:18.900145   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:51:18.901654   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Stop
	I0314 00:51:18.904900   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 0/120
	I0314 00:51:19.906305   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 1/120
	I0314 00:51:20.907982   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 2/120
	I0314 00:51:21.909447   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 3/120
	I0314 00:51:22.910974   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 4/120
	I0314 00:51:23.913085   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 5/120
	I0314 00:51:24.914579   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 6/120
	I0314 00:51:25.915862   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 7/120
	I0314 00:51:26.917348   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 8/120
	I0314 00:51:27.918956   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 9/120
	I0314 00:51:28.920435   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 10/120
	I0314 00:51:29.921916   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 11/120
	I0314 00:51:30.923399   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 12/120
	I0314 00:51:31.925104   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 13/120
	I0314 00:51:32.927338   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 14/120
	I0314 00:51:33.929614   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 15/120
	I0314 00:51:34.931640   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 16/120
	I0314 00:51:35.932975   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 17/120
	I0314 00:51:36.934559   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 18/120
	I0314 00:51:37.936224   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 19/120
	I0314 00:51:38.937565   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 20/120
	I0314 00:51:39.938954   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 21/120
	I0314 00:51:40.940383   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 22/120
	I0314 00:51:41.941908   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 23/120
	I0314 00:51:42.943431   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 24/120
	I0314 00:51:43.945283   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 25/120
	I0314 00:51:44.946799   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 26/120
	I0314 00:51:45.948275   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 27/120
	I0314 00:51:46.949894   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 28/120
	I0314 00:51:47.951396   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 29/120
	I0314 00:51:48.952596   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 30/120
	I0314 00:51:49.954016   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 31/120
	I0314 00:51:50.955491   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 32/120
	I0314 00:51:51.956845   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 33/120
	I0314 00:51:52.958147   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 34/120
	I0314 00:51:53.960036   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 35/120
	I0314 00:51:54.961590   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 36/120
	I0314 00:51:55.963195   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 37/120
	I0314 00:51:56.965367   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 38/120
	I0314 00:51:57.966756   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 39/120
	I0314 00:51:58.969184   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 40/120
	I0314 00:51:59.970590   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 41/120
	I0314 00:52:00.972070   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 42/120
	I0314 00:52:01.973393   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 43/120
	I0314 00:52:02.974958   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 44/120
	I0314 00:52:03.977149   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 45/120
	I0314 00:52:04.978645   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 46/120
	I0314 00:52:05.980293   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 47/120
	I0314 00:52:06.981746   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 48/120
	I0314 00:52:07.983158   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 49/120
	I0314 00:52:08.985531   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 50/120
	I0314 00:52:09.987096   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 51/120
	I0314 00:52:10.988467   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 52/120
	I0314 00:52:11.989985   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 53/120
	I0314 00:52:12.991524   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 54/120
	I0314 00:52:13.993628   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 55/120
	I0314 00:52:14.995200   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 56/120
	I0314 00:52:15.996392   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 57/120
	I0314 00:52:16.997955   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 58/120
	I0314 00:52:17.999337   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 59/120
	I0314 00:52:19.001243   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 60/120
	I0314 00:52:20.002744   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 61/120
	I0314 00:52:21.004381   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 62/120
	I0314 00:52:22.006593   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 63/120
	I0314 00:52:23.007969   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 64/120
	I0314 00:52:24.010382   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 65/120
	I0314 00:52:25.011826   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 66/120
	I0314 00:52:26.013408   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 67/120
	I0314 00:52:27.014946   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 68/120
	I0314 00:52:28.016404   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 69/120
	I0314 00:52:29.019019   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 70/120
	I0314 00:52:30.020692   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 71/120
	I0314 00:52:31.022658   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 72/120
	I0314 00:52:32.023992   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 73/120
	I0314 00:52:33.025462   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 74/120
	I0314 00:52:34.027133   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 75/120
	I0314 00:52:35.028638   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 76/120
	I0314 00:52:36.030009   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 77/120
	I0314 00:52:37.031783   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 78/120
	I0314 00:52:38.033269   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 79/120
	I0314 00:52:39.035874   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 80/120
	I0314 00:52:40.037433   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 81/120
	I0314 00:52:41.038711   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 82/120
	I0314 00:52:42.040146   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 83/120
	I0314 00:52:43.041722   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 84/120
	I0314 00:52:44.043841   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 85/120
	I0314 00:52:45.045313   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 86/120
	I0314 00:52:46.047334   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 87/120
	I0314 00:52:47.048597   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 88/120
	I0314 00:52:48.050021   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 89/120
	I0314 00:52:49.052321   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 90/120
	I0314 00:52:50.053740   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 91/120
	I0314 00:52:51.054982   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 92/120
	I0314 00:52:52.057499   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 93/120
	I0314 00:52:53.058851   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 94/120
	I0314 00:52:54.060977   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 95/120
	I0314 00:52:55.062171   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 96/120
	I0314 00:52:56.063708   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 97/120
	I0314 00:52:57.065006   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 98/120
	I0314 00:52:58.066535   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 99/120
	I0314 00:52:59.068837   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 100/120
	I0314 00:53:00.070317   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 101/120
	I0314 00:53:01.071707   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 102/120
	I0314 00:53:02.073341   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 103/120
	I0314 00:53:03.074737   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 104/120
	I0314 00:53:04.076979   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 105/120
	I0314 00:53:05.078486   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 106/120
	I0314 00:53:06.079925   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 107/120
	I0314 00:53:07.081445   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 108/120
	I0314 00:53:08.082678   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 109/120
	I0314 00:53:09.085057   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 110/120
	I0314 00:53:10.086526   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 111/120
	I0314 00:53:11.087963   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 112/120
	I0314 00:53:12.089650   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 113/120
	I0314 00:53:13.091857   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 114/120
	I0314 00:53:14.094104   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 115/120
	I0314 00:53:15.095827   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 116/120
	I0314 00:53:16.097280   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 117/120
	I0314 00:53:17.098991   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 118/120
	I0314 00:53:18.100316   65100 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for machine to stop 119/120
	I0314 00:53:19.101560   65100 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0314 00:53:19.101620   65100 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0314 00:53:19.103898   65100 out.go:177] 
	W0314 00:53:19.105674   65100 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0314 00:53:19.105689   65100 out.go:239] * 
	W0314 00:53:19.108256   65100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 00:53:19.109909   65100 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-652215 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652215 -n default-k8s-diff-port-652215
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652215 -n default-k8s-diff-port-652215: exit status 3 (18.575273556s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:53:37.687064   65762 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.7:22: connect: no route to host
	E0314 00:53:37.687086   65762 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.7:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-652215" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.13s)
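This is the same 120 one-second-poll wait loop as the no-preload stop above, ending in the same GUEST_STOP_TIMEOUT and the same unreachable host afterwards. To separate "the stop request never reaches libvirt" from "the guest ignores the ACPI power-off", the driver's wait can be reproduced by hand; the sketch below is illustrative only and again assumes the libvirt domain is named after the profile.

    # Issue the power-off and poll the domain state once per second, mirroring the log above.
    virsh shutdown default-k8s-diff-port-652215
    for i in $(seq 1 120); do
        state=$(virsh domstate default-k8s-diff-port-652215)
        [ "$state" = "shut off" ] && break
        sleep 1
    done
    echo "state after $i polls: $state"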

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-004791 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-004791 create -f testdata/busybox.yaml: exit status 1 (44.415696ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-004791" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-004791 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791
E0314 00:52:19.366412   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791: exit status 6 (230.611012ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:52:19.422944   65332 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-004791" does not appear in /home/jenkins/minikube-integration/18375-4912/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-004791" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791: exit status 6 (230.535027ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:52:19.655819   65362 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-004791" does not appear in /home/jenkins/minikube-integration/18375-4912/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-004791" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)
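DeployApp never reaches the cluster: kubectl has no "old-k8s-version-004791" context, and both status probes warn that kubectl points at a stale minikube VM. The post-mortem output itself suggests `minikube update-context`; the sketch below is that recovery path, and it only helps if the cluster is actually reachable, which the other old-k8s-version failures make doubtful.

    # Is the context present in the kubeconfig at all?
    kubectl config get-contexts

    # Regenerate the kubeconfig entry for this profile, then retry the deploy from the test.
    out/minikube-linux-amd64 -p old-k8s-version-004791 update-context
    kubectl --context old-k8s-version-004791 create -f testdata/busybox.yaml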

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-004791 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0314 00:52:24.204953   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-004791 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m36.586600695s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-004791 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-004791 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-004791 describe deploy/metrics-server -n kube-system: exit status 1 (42.916087ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-004791" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-004791 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791: exit status 6 (226.088654ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:53:56.511245   66088 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-004791" does not appear in /home/jenkins/minikube-integration/18375-4912/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-004791" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.86s)
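The addon enable fails while applying the metrics-server manifests: kubectl inside the guest is refused on localhost:8443, meaning the API server is not serving, and the follow-up describe fails because the kubeconfig context is missing as well. A hedged way to confirm the apiserver state from outside the test run is sketched below; it assumes the VM is reachable over SSH and that curl is available in the guest image.

    # Is the kube-apiserver container running under CRI-O, and does it answer on 8443?
    out/minikube-linux-amd64 -p old-k8s-version-004791 ssh "sudo crictl ps -a | grep kube-apiserver"
    out/minikube-linux-amd64 -p old-k8s-version-004791 ssh "curl -sk https://localhost:8443/healthz"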

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-164135 -n embed-certs-164135
E0314 00:52:33.668292   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-164135 -n embed-certs-164135: exit status 3 (3.16774456s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:52:35.063046   65457 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.72:22: connect: no route to host
	E0314 00:52:35.063067   65457 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.72:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-164135 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0314 00:52:39.706333   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-164135 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153958256s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.72:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-164135 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-164135 -n embed-certs-164135
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-164135 -n embed-certs-164135: exit status 3 (3.06251871s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:52:44.279290   65527 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.72:22: connect: no route to host
	E0314 00:52:44.279313   65527 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.72:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-164135" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
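Editor's note (not part of the captured output): the EnableAddonAfterStop failures in this report share one signature. After the preceding Stop step the VM never becomes reachable over SSH (dial tcp <node-ip>:22: connect: no route to host), so "status --format={{.Host}}" reports "Error" instead of the expected "Stopped", and the follow-up "addons enable dashboard" exits 11 with MK_ADDON_ENABLE_PAUSED for the same reason. A minimal manual reproduction sketch, assuming the profile name and binary path quoted in the log above:

	# stop the profile, then query only the host state; the test expects "Stopped"
	out/minikube-linux-amd64 stop -p embed-certs-164135
	out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-164135
	# in this run SSH to 192.168.50.72:22 returned "no route to host", so the state was "Error"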

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-585806 -n no-preload-585806
E0314 00:53:16.313798   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:53:16.954883   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:53:18.235759   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-585806 -n no-preload-585806: exit status 3 (3.168093717s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:53:19.351089   65732 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.115:22: connect: no route to host
	E0314 00:53:19.351107   65732 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.115:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-585806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0314 00:53:20.796496   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-585806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152549426s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.115:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-585806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-585806 -n no-preload-585806
E0314 00:53:25.917698   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:53:27.657066   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-585806 -n no-preload-585806: exit status 3 (3.062895102s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:53:28.567093   65823 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.115:22: connect: no route to host
	E0314 00:53:28.567114   65823 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.115:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-585806" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652215 -n default-k8s-diff-port-652215
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652215 -n default-k8s-diff-port-652215: exit status 3 (3.167927102s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:53:40.855201   65922 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.7:22: connect: no route to host
	E0314 00:53:40.855226   65922 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.7:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-652215 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-652215 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153418801s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.7:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-652215 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652215 -n default-k8s-diff-port-652215
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652215 -n default-k8s-diff-port-652215: exit status 3 (3.062344795s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 00:53:50.071217   65991 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.7:22: connect: no route to host
	E0314 00:53:50.071240   65991 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.7:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-652215" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (754.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-004791 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0314 00:54:03.273554   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:54:05.834253   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:54:08.618246   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:54:10.954885   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:54:21.195513   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:54:27.086594   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:54:35.522174   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:54:37.600046   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:54:41.675892   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:54:44.448806   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0314 00:54:55.862946   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 00:54:59.383170   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0314 00:55:03.207376   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:55:22.636565   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:55:23.547110   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 00:55:30.538937   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:55:59.521229   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:56:11.745382   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 00:56:39.429604   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 00:56:43.240473   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:56:44.557643   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:57:10.927797   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
E0314 00:57:46.696430   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:58:14.379124   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:58:15.676020   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:58:36.336076   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0314 00:58:43.361756   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:59:00.714720   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:59:28.398075   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:59:35.522742   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:59:44.448535   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0314 00:59:55.862146   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 01:01:11.745268   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 01:01:43.240249   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-004791 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m30.552799227s)

                                                
                                                
-- stdout --
	* [old-k8s-version-004791] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-004791" primary control-plane node in "old-k8s-version-004791" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-004791" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 00:54:03.108880   66232 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:54:03.109016   66232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:54:03.109028   66232 out.go:304] Setting ErrFile to fd 2...
	I0314 00:54:03.109034   66232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:54:03.109233   66232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:54:03.109796   66232 out.go:298] Setting JSON to false
	I0314 00:54:03.110638   66232 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5786,"bootTime":1710371857,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:54:03.110699   66232 start.go:139] virtualization: kvm guest
	I0314 00:54:03.113106   66232 out.go:177] * [old-k8s-version-004791] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:54:03.114565   66232 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:54:03.115894   66232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:54:03.114598   66232 notify.go:220] Checking for updates...
	I0314 00:54:03.119029   66232 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:54:03.120493   66232 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:54:03.121915   66232 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:54:03.123383   66232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:54:03.125258   66232 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:54:03.125814   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:54:03.125873   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:54:03.140521   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0314 00:54:03.140889   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:54:03.141339   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:54:03.141362   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:54:03.141702   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:54:03.141898   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:54:03.143989   66232 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0314 00:54:03.145403   66232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:54:03.145671   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:54:03.145711   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:54:03.159852   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0314 00:54:03.160244   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:54:03.160722   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:54:03.160742   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:54:03.161088   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:54:03.161279   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:54:03.197047   66232 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 00:54:03.198624   66232 start.go:297] selected driver: kvm2
	I0314 00:54:03.198642   66232 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:54:03.198784   66232 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:54:03.199455   66232 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:54:03.199536   66232 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:54:03.214619   66232 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:54:03.214983   66232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 00:54:03.215045   66232 cni.go:84] Creating CNI manager for ""
	I0314 00:54:03.215065   66232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:54:03.215109   66232 start.go:340] cluster config:
	{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:54:03.215204   66232 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:54:03.217175   66232 out.go:177] * Starting "old-k8s-version-004791" primary control-plane node in "old-k8s-version-004791" cluster
	I0314 00:54:03.218613   66232 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:54:03.218655   66232 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0314 00:54:03.218680   66232 cache.go:56] Caching tarball of preloaded images
	I0314 00:54:03.218748   66232 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 00:54:03.218758   66232 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0314 00:54:03.218868   66232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:54:03.219079   66232 start.go:360] acquireMachinesLock for old-k8s-version-004791: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:58:01.167666   66232 start.go:364] duration metric: took 3m57.948538504s to acquireMachinesLock for "old-k8s-version-004791"
	I0314 00:58:01.167732   66232 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:58:01.167743   66232 fix.go:54] fixHost starting: 
	I0314 00:58:01.168159   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:01.168192   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:01.184977   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39193
	I0314 00:58:01.185352   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:01.185781   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:58:01.185799   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:01.186133   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:01.186318   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:01.186463   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetState
	I0314 00:58:01.187778   66232 fix.go:112] recreateIfNeeded on old-k8s-version-004791: state=Stopped err=<nil>
	I0314 00:58:01.187814   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	W0314 00:58:01.187966   66232 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:58:01.190508   66232 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-004791" ...
	I0314 00:58:01.192096   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .Start
	I0314 00:58:01.192279   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring networks are active...
	I0314 00:58:01.192923   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network default is active
	I0314 00:58:01.193276   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network mk-old-k8s-version-004791 is active
	I0314 00:58:01.193771   66232 main.go:141] libmachine: (old-k8s-version-004791) Getting domain xml...
	I0314 00:58:01.194453   66232 main.go:141] libmachine: (old-k8s-version-004791) Creating domain...
	I0314 00:58:02.495098   66232 main.go:141] libmachine: (old-k8s-version-004791) Waiting to get IP...
	I0314 00:58:02.496096   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:02.496509   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:02.496599   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:02.496504   66971 retry.go:31] will retry after 226.458873ms: waiting for machine to come up
	I0314 00:58:02.724812   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:02.725355   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:02.725383   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:02.725305   66971 retry.go:31] will retry after 274.59062ms: waiting for machine to come up
	I0314 00:58:03.001727   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.002335   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.002486   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.002429   66971 retry.go:31] will retry after 362.865307ms: waiting for machine to come up
	I0314 00:58:03.367211   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.367946   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.367985   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.367818   66971 retry.go:31] will retry after 545.955079ms: waiting for machine to come up
	I0314 00:58:03.915415   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.915920   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.915946   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.915836   66971 retry.go:31] will retry after 509.217519ms: waiting for machine to come up
	I0314 00:58:04.426378   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:04.426707   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:04.426730   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:04.426682   66971 retry.go:31] will retry after 834.85927ms: waiting for machine to come up
	I0314 00:58:05.263751   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:05.264214   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:05.264244   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:05.264155   66971 retry.go:31] will retry after 986.483361ms: waiting for machine to come up
	I0314 00:58:06.251927   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:06.252550   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:06.252573   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:06.252475   66971 retry.go:31] will retry after 1.151541473s: waiting for machine to come up
	I0314 00:58:07.405797   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:07.406395   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:07.406425   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:07.406349   66971 retry.go:31] will retry after 1.406754601s: waiting for machine to come up
	I0314 00:58:08.814918   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:08.815383   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:08.815414   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:08.815336   66971 retry.go:31] will retry after 1.619075545s: waiting for machine to come up
	I0314 00:58:10.435841   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:10.436245   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:10.436272   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:10.436204   66971 retry.go:31] will retry after 2.396707044s: waiting for machine to come up
	I0314 00:58:12.834287   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:12.834691   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:12.834720   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:12.834649   66971 retry.go:31] will retry after 2.803309164s: waiting for machine to come up
	I0314 00:58:15.639214   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:15.639656   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:15.639696   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:15.639617   66971 retry.go:31] will retry after 3.192360952s: waiting for machine to come up
	I0314 00:58:18.833069   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:18.833438   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:18.833470   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:18.833388   66971 retry.go:31] will retry after 5.67556795s: waiting for machine to come up
	I0314 00:58:24.511666   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.512275   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has current primary IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.512307   66232 main.go:141] libmachine: (old-k8s-version-004791) Found IP for machine: 192.168.72.11
	I0314 00:58:24.512321   66232 main.go:141] libmachine: (old-k8s-version-004791) Reserving static IP address...
	I0314 00:58:24.512704   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.512726   66232 main.go:141] libmachine: (old-k8s-version-004791) Reserved static IP address: 192.168.72.11
	I0314 00:58:24.512740   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | skip adding static IP to network mk-old-k8s-version-004791 - found existing host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"}
	I0314 00:58:24.512751   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Getting to WaitForSSH function...
	I0314 00:58:24.512763   66232 main.go:141] libmachine: (old-k8s-version-004791) Waiting for SSH to be available...
	I0314 00:58:24.515177   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.515623   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.515657   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.515863   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH client type: external
	I0314 00:58:24.515892   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa (-rw-------)
	I0314 00:58:24.515924   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:58:24.515940   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | About to run SSH command:
	I0314 00:58:24.515956   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | exit 0
	I0314 00:58:24.642866   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | SSH cmd err, output: <nil>: 
	I0314 00:58:24.643186   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetConfigRaw
	I0314 00:58:24.643853   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:24.645950   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.646309   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.646338   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.646566   66232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:58:24.646801   66232 machine.go:94] provisionDockerMachine start ...
	I0314 00:58:24.646823   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:24.647032   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.649249   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.649588   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.649618   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.649752   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.649926   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.650131   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.650315   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.650487   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.650664   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.650675   66232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:58:24.763290   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:58:24.763320   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:24.763558   66232 buildroot.go:166] provisioning hostname "old-k8s-version-004791"
	I0314 00:58:24.763592   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:24.763745   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.766422   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.766719   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.766745   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.766894   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.767075   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.767238   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.767388   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.767564   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.767776   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.767795   66232 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-004791 && echo "old-k8s-version-004791" | sudo tee /etc/hostname
	I0314 00:58:24.893811   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-004791
	
	I0314 00:58:24.893844   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.896527   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.896909   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.896937   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.897096   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.897277   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.897455   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.897623   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.897814   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.897979   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.897995   66232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-004791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-004791/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-004791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:25.021661   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:25.021695   66232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:25.021722   66232 buildroot.go:174] setting up certificates
	I0314 00:58:25.021735   66232 provision.go:84] configureAuth start
	I0314 00:58:25.021766   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:25.022032   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:25.024687   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.024989   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.025030   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.025155   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.027609   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.027948   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.027977   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.028079   66232 provision.go:143] copyHostCerts
	I0314 00:58:25.028145   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:25.028155   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:25.028208   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:25.028333   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:25.028342   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:25.028361   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:25.028421   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:25.028428   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:25.028445   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:25.028532   66232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-004791 san=[127.0.0.1 192.168.72.11 localhost minikube old-k8s-version-004791]
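For context on what configureAuth is producing in the line above, here is a minimal Go sketch of issuing a CA-signed server certificate carrying the same SANs (127.0.0.1, 192.168.72.11, localhost, minikube, old-k8s-version-004791). It generates a throwaway CA in place of minikube's ca.pem/ca-key.pem and elides error handling, so it illustrates the pattern only and is not the actual provisioning code.

```go
// Illustrative sketch: issue a CA-signed server certificate with the SANs
// from the log line above. Error handling is dropped for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real flow loads ca.pem / ca-key.pem instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-004791"}},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-004791"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.11")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```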
	I0314 00:58:25.338174   66232 provision.go:177] copyRemoteCerts
	I0314 00:58:25.338239   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:25.338272   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.340651   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.341044   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.341084   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.341243   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.341445   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.341613   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.341779   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:25.437346   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 00:58:25.464534   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:58:25.491186   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:25.520290   66232 provision.go:87] duration metric: took 498.536449ms to configureAuth
	I0314 00:58:25.520330   66232 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:25.520551   66232 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:58:25.520631   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.523579   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.523954   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.523982   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.524176   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.524418   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.524604   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.524841   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.525032   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:25.525233   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:25.525267   66232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:25.813702   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:25.813724   66232 machine.go:97] duration metric: took 1.166910056s to provisionDockerMachine
	I0314 00:58:25.813735   66232 start.go:293] postStartSetup for "old-k8s-version-004791" (driver="kvm2")
	I0314 00:58:25.813745   66232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:25.813767   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:25.814102   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:25.814132   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.816973   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.817316   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.817351   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.817496   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.817695   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.817895   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.818065   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:25.905564   66232 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:25.910139   66232 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:25.910168   66232 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:25.910237   66232 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:25.910315   66232 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:25.910406   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:25.919998   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:25.946236   66232 start.go:296] duration metric: took 132.483335ms for postStartSetup
	I0314 00:58:25.946270   66232 fix.go:56] duration metric: took 24.778527973s for fixHost
	I0314 00:58:25.946291   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.948993   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.949352   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.949382   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.949491   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.949674   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.949839   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.950008   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.950178   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:25.950327   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:25.950337   66232 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 00:58:26.059477   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377906.045276928
	
	I0314 00:58:26.059498   66232 fix.go:216] guest clock: 1710377906.045276928
	I0314 00:58:26.059504   66232 fix.go:229] Guest: 2024-03-14 00:58:26.045276928 +0000 UTC Remote: 2024-03-14 00:58:25.946273472 +0000 UTC m=+262.884746009 (delta=99.003456ms)
	I0314 00:58:26.059522   66232 fix.go:200] guest clock delta is within tolerance: 99.003456ms
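The two lines above compare the guest's `date +%s.%N` output against the host clock and accept the ~99ms delta. A rough Go sketch of that check follows; the sample timestamp is taken from the log, while the 2-second tolerance is an assumption for illustration rather than the value minikube uses.

```go
// Illustrative sketch of the guest-clock check: parse `date +%s.%N` output,
// diff it against the host clock, and flag drift beyond a tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to exactly nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold for illustration
	guest, err := guestClock("1710377906.045276928") // sample from the log
	if err != nil {
		panic(err)
	}
	// In the real flow the host time is captured alongside the SSH call.
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta > tolerance {
		fmt.Printf("guest clock drift %v exceeds tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
```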
	I0314 00:58:26.059528   66232 start.go:83] releasing machines lock for "old-k8s-version-004791", held for 24.891823469s
	I0314 00:58:26.059556   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.059832   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:26.062667   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.063095   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.063126   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.063322   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064047   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064262   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064348   66232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:26.064396   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:26.064505   66232 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:26.064530   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:26.067308   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067569   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067602   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.067626   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067738   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:26.067912   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:26.068059   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:26.068063   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.068095   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.068199   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:26.068210   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:26.068347   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:26.068538   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:26.068717   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:26.182072   66232 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:26.188630   66232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:26.337675   66232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:26.344107   66232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:26.344178   66232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:26.363679   66232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:26.363704   66232 start.go:494] detecting cgroup driver to use...
	I0314 00:58:26.363770   66232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:26.380626   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:26.397287   66232 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:26.397354   66232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:26.411921   66232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:26.428111   66232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:26.548503   66232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:26.718585   66232 docker.go:233] disabling docker service ...
	I0314 00:58:26.718667   66232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:26.737814   66232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:26.759326   66232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:26.907505   66232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:27.052915   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:27.074324   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:27.096627   66232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 00:58:27.096688   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.109204   66232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:27.109280   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.122529   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.135542   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.149084   66232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:27.166838   66232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:27.178148   66232 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:27.178201   66232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:27.194015   66232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
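The failed sysctl probe above is handled by falling back to loading the kernel module and then enabling IP forwarding. A small Go sketch of that fallback, using the same commands the log shows (the sudo prefix and simplified error handling are assumptions of the sketch):

```go
// Illustrative sketch: if the bridge-netfilter sysctl cannot be read (the
// module is not loaded, hence the exit status 255 above), load br_netfilter,
// then enable IPv4 forwarding as the following log line does.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
	}
	return nil
}

func main() {
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			panic(err)
		}
	}
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		panic(err)
	}
}
```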
	I0314 00:58:27.206652   66232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:27.363680   66232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:58:27.546218   66232 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:27.546291   66232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:27.552622   66232 start.go:562] Will wait 60s for crictl version
	I0314 00:58:27.552693   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:27.557087   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:27.600271   66232 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:27.600369   66232 ssh_runner.go:195] Run: crio --version
	I0314 00:58:27.631397   66232 ssh_runner.go:195] Run: crio --version
	I0314 00:58:27.670760   66232 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 00:58:27.671963   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:27.674890   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:27.675324   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:27.675352   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:27.675617   66232 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:27.680460   66232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:27.694168   66232 kubeadm.go:877] updating cluster {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:27.694308   66232 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:58:27.694363   66232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:27.750541   66232 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:58:27.750608   66232 ssh_runner.go:195] Run: which lz4
	I0314 00:58:27.755341   66232 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 00:58:27.759948   66232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:27.759972   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 00:58:29.648218   66232 crio.go:444] duration metric: took 1.892901715s to copy over tarball
	I0314 00:58:29.648301   66232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:32.846478   66232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.198145754s)
	I0314 00:58:32.846506   66232 crio.go:451] duration metric: took 3.198257099s to extract the tarball
	I0314 00:58:32.846513   66232 ssh_runner.go:146] rm: /preloaded.tar.lz4
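The sequence above transfers the preload tarball only because the stat on /preloaded.tar.lz4 failed, then extracts it and removes it. A simplified, local-filesystem Go sketch of the "copy only if missing" part (in the real flow the stat and copy happen over SSH to the guest):

```go
// Illustrative sketch: transfer the preload tarball only when the destination
// does not already have it. Paths are taken from the log.
package main

import (
	"fmt"
	"io"
	"os"
)

func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Println("already present, skipping:", dst)
		return nil
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	src := "/home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	if err := copyIfMissing(src, "/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```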
	I0314 00:58:32.893263   66232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:32.930449   66232 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:58:32.930473   66232 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:58:32.930511   66232 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:32.930536   66232 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:32.930550   66232 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:32.930559   66232 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:32.930802   66232 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:32.930888   66232 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 00:58:32.930940   66232 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:32.931147   66232 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:32.931888   66232 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:32.931948   66232 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:32.932319   66232 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:32.932341   66232 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 00:58:32.932374   66232 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:32.932381   66232 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:32.932370   66232 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:32.932419   66232 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.154008   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 00:58:33.158391   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.163815   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.167903   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.168224   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.169039   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.185385   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.418931   66232 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 00:58:33.418981   66232 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 00:58:33.419031   66232 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 00:58:33.419052   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419063   66232 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 00:58:33.419099   66232 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.419031   66232 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 00:58:33.419118   66232 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 00:58:33.419141   66232 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.419173   66232 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 00:58:33.419200   66232 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.419232   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419099   66232 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.419310   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419177   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419143   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419142   66232 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 00:58:33.419396   66232 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.419419   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419144   66232 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.419472   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.436581   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 00:58:33.436585   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.436693   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.436697   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.436760   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.436812   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.436821   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.605693   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 00:58:33.605727   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 00:58:33.605788   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 00:58:33.605799   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 00:58:33.605879   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 00:58:33.605912   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 00:58:33.605952   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 00:58:33.844071   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:33.989885   66232 cache_images.go:92] duration metric: took 1.059398314s to LoadCachedImages
	W0314 00:58:33.990001   66232 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0314 00:58:33.990027   66232 kubeadm.go:928] updating node { 192.168.72.11 8443 v1.20.0 crio true true} ...
	I0314 00:58:33.990157   66232 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-004791 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:33.990220   66232 ssh_runner.go:195] Run: crio config
	I0314 00:58:34.044723   66232 cni.go:84] Creating CNI manager for ""
	I0314 00:58:34.044746   66232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:34.044759   66232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:34.044775   66232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-004791 NodeName:old-k8s-version-004791 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 00:58:34.044900   66232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-004791"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:34.044958   66232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 00:58:34.059679   66232 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:34.059734   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:34.073682   66232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0314 00:58:34.095098   66232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:34.113899   66232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0314 00:58:34.132875   66232 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:34.137285   66232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
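The bash one-liner above rewrites /etc/hosts idempotently: drop any existing control-plane.minikube.internal entry, append the fresh mapping, and copy the result back into place. A Go sketch of the same idea, writing to a scratch file instead of replacing /etc/hosts:

```go
// Illustrative sketch of the idempotent hosts-file edit: remove any stale
// mapping for the name, append the desired one, write the result out.
package main

import (
	"fmt"
	"os"
	"strings"
)

func withHostEntry(hosts, ip, name string) string {
	lines := strings.Split(strings.TrimRight(hosts, "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping, like the grep -v above
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	updated := withHostEntry(string(data), "192.168.72.11", "control-plane.minikube.internal")
	if err := os.WriteFile("/tmp/hosts.updated", []byte(updated), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote /tmp/hosts.updated")
}
```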
	I0314 00:58:34.151566   66232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:34.276059   66232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:34.295472   66232 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791 for IP: 192.168.72.11
	I0314 00:58:34.295496   66232 certs.go:194] generating shared ca certs ...
	I0314 00:58:34.295528   66232 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:34.295718   66232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:34.295779   66232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:34.295794   66232 certs.go:256] generating profile certs ...
	I0314 00:58:34.295909   66232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/client.key
	I0314 00:58:34.295968   66232 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key.c57f8e0c
	I0314 00:58:34.296022   66232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key
	I0314 00:58:34.296176   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:34.296213   66232 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:34.296224   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:34.296255   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:34.296296   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:34.296336   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:34.296397   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:34.297181   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:34.351330   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:34.389003   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:34.439281   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:34.476704   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 00:58:34.524931   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:58:34.554905   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:34.584216   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:58:34.610661   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:34.636484   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:34.662623   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:34.692373   66232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:34.714670   66232 ssh_runner.go:195] Run: openssl version
	I0314 00:58:34.721394   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:34.734219   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.739692   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.739767   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.746281   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:34.758520   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:34.770960   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.775963   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.776034   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.782485   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:34.795932   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:34.808632   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.814277   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.814338   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.820985   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:34.832959   66232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:34.838642   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:34.845061   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:34.852475   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:34.859861   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:34.866413   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:34.873327   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
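The `openssl x509 -checkend 86400` runs above ask whether each certificate will still be valid in 24 hours. A native Go equivalent of that check, shown against one of the paths from the log, for readers who want to reproduce it without openssl:

```go
// Illustrative Go equivalent of `openssl x509 -checkend 86400`: parse a PEM
// certificate and report whether it expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h, regeneration needed")
	} else {
		fmt.Println("certificate still valid for more than 24h")
	}
}
```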
	I0314 00:58:34.880000   66232 kubeadm.go:391] StartCluster: {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:34.880134   66232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:34.880194   66232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:34.927555   66232 cri.go:89] found id: ""
	I0314 00:58:34.927623   66232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:34.939638   66232 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:34.939668   66232 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:34.939677   66232 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:34.939741   66232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:34.950530   66232 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:34.952013   66232 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-004791" does not appear in /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:34.952997   66232 kubeconfig.go:62] /home/jenkins/minikube-integration/18375-4912/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-004791" cluster setting kubeconfig missing "old-k8s-version-004791" context setting]
	I0314 00:58:34.954526   66232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:34.956927   66232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:34.968566   66232 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.11
	I0314 00:58:34.968605   66232 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:34.968619   66232 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:34.968700   66232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:35.007848   66232 cri.go:89] found id: ""
	I0314 00:58:35.007925   66232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:35.025328   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:35.038637   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:35.038656   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:35.038709   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:35.050807   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:35.050869   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:35.063219   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:35.075855   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:35.075920   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:35.085699   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:35.095334   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:35.095380   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:35.105241   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:35.115726   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:35.115792   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:35.125426   66232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:35.135277   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:35.258033   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.100884   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.354746   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.473996   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.579335   66232 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:36.579424   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:37.079896   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:37.579976   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:38.079765   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:38.579818   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.079976   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.579658   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:40.079585   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:40.580162   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:41.079979   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:41.580503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:42.079887   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:42.579730   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:43.080073   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:43.579875   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.080058   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.579576   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:45.080234   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:45.579747   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:46.080269   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:46.579541   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:47.079514   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:47.580409   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:48.080337   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:48.579595   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.079898   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.580139   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:50.079945   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:50.579977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:51.079981   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:51.580391   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:52.080057   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:52.579968   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:53.080503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:53.579463   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.080043   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.579984   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:55.080165   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:55.580029   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.079980   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.580014   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.080139   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.580122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:58.080405   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:58.580011   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:59.079610   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:59.579674   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:00.079861   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:00.579713   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.079977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.580027   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:02.079793   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:02.579549   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:03.080040   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:03.580280   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:04.079957   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:04.580070   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:05.079965   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:05.580193   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:06.079657   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:06.580026   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:07.080460   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:07.579573   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:08.079458   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:08.579872   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.080006   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.579949   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:10.079511   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:10.579616   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:11.080003   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:11.580335   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:12.079830   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:12.579519   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:13.080004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:13.580021   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.079972   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.580562   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:15.079973   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:15.580183   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:16.080442   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:16.580265   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:17.079726   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:17.580004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:18.080000   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:18.580382   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.079467   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.579813   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:20.080492   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:20.580051   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:21.079982   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:21.579462   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:22.079943   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:22.579753   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:23.079986   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:23.579609   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:24.080429   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:24.579806   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:25.079568   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:25.580411   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:26.079986   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:26.580297   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:27.079547   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:27.579543   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:28.080116   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:28.580503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:29.079562   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:29.579984   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.079977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.579657   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:31.080002   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:31.580430   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:32.079709   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:32.579764   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:33.079717   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:33.579468   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:34.079959   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:34.579891   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:35.079953   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:35.579666   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:36.080471   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:36.580528   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:36.580620   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:36.628794   66232 cri.go:89] found id: ""
	I0314 00:59:36.628825   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.628836   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:36.628844   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:36.628903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:36.665474   66232 cri.go:89] found id: ""
	I0314 00:59:36.665504   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.665514   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:36.665521   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:36.665612   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:36.703404   66232 cri.go:89] found id: ""
	I0314 00:59:36.703436   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.703443   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:36.703449   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:36.703515   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:36.739602   66232 cri.go:89] found id: ""
	I0314 00:59:36.739629   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.739636   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:36.739642   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:36.739698   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:36.777836   66232 cri.go:89] found id: ""
	I0314 00:59:36.777862   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.777869   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:36.777875   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:36.777921   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:36.817211   66232 cri.go:89] found id: ""
	I0314 00:59:36.817254   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.817264   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:36.817271   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:36.817320   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:36.855890   66232 cri.go:89] found id: ""
	I0314 00:59:36.855924   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.855943   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:36.855951   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:36.856007   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:36.894333   66232 cri.go:89] found id: ""
	I0314 00:59:36.894360   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.894371   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:36.894391   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:36.894406   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:36.909757   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:36.909796   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:37.039754   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:37.039774   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:37.039785   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:37.100601   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:37.100635   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:37.143950   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:37.143976   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:39.696850   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:39.720410   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:39.720480   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:39.759574   66232 cri.go:89] found id: ""
	I0314 00:59:39.759624   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.759635   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:39.759643   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:39.759719   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:39.802990   66232 cri.go:89] found id: ""
	I0314 00:59:39.803013   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.803021   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:39.803026   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:39.803090   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:39.850691   66232 cri.go:89] found id: ""
	I0314 00:59:39.850718   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.850729   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:39.850736   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:39.850831   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:39.890748   66232 cri.go:89] found id: ""
	I0314 00:59:39.890796   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.890806   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:39.890813   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:39.890871   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:39.929333   66232 cri.go:89] found id: ""
	I0314 00:59:39.929361   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.929368   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:39.929374   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:39.929428   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:39.969207   66232 cri.go:89] found id: ""
	I0314 00:59:39.969241   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.969248   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:39.969254   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:39.969328   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:40.006207   66232 cri.go:89] found id: ""
	I0314 00:59:40.006241   66232 logs.go:276] 0 containers: []
	W0314 00:59:40.006252   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:40.006260   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:40.006343   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:40.047357   66232 cri.go:89] found id: ""
	I0314 00:59:40.047384   66232 logs.go:276] 0 containers: []
	W0314 00:59:40.047391   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:40.047400   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:40.047418   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:40.095431   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:40.095461   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:40.151675   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:40.151710   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:40.169388   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:40.169426   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:40.252915   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:40.252941   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:40.252958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:42.828437   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:42.842753   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:42.842838   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:42.881157   66232 cri.go:89] found id: ""
	I0314 00:59:42.881189   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.881200   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:42.881207   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:42.881267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:42.921364   66232 cri.go:89] found id: ""
	I0314 00:59:42.921393   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.921405   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:42.921412   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:42.921477   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:42.956622   66232 cri.go:89] found id: ""
	I0314 00:59:42.956647   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.956655   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:42.956660   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:42.956705   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:42.994476   66232 cri.go:89] found id: ""
	I0314 00:59:42.994502   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.994514   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:42.994521   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:42.994580   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:43.032061   66232 cri.go:89] found id: ""
	I0314 00:59:43.032089   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.032099   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:43.032106   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:43.032177   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:43.073398   66232 cri.go:89] found id: ""
	I0314 00:59:43.073427   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.073444   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:43.073452   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:43.073527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:43.111407   66232 cri.go:89] found id: ""
	I0314 00:59:43.111891   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.111902   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:43.111909   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:43.111988   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:43.154347   66232 cri.go:89] found id: ""
	I0314 00:59:43.154374   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.154384   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:43.154393   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:43.154422   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:43.202605   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:43.202636   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:43.257108   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:43.257143   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:43.273252   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:43.273282   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:43.347646   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:43.347671   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:43.347687   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:45.920045   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:45.934299   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:45.934379   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:45.973556   66232 cri.go:89] found id: ""
	I0314 00:59:45.973588   66232 logs.go:276] 0 containers: []
	W0314 00:59:45.973599   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:45.973607   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:45.973668   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:46.012623   66232 cri.go:89] found id: ""
	I0314 00:59:46.012653   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.012660   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:46.012667   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:46.012720   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:46.052290   66232 cri.go:89] found id: ""
	I0314 00:59:46.052318   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.052328   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:46.052336   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:46.052401   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:46.089098   66232 cri.go:89] found id: ""
	I0314 00:59:46.089129   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.089139   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:46.089147   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:46.089207   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:46.149733   66232 cri.go:89] found id: ""
	I0314 00:59:46.149768   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.149778   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:46.149787   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:46.149856   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:46.210517   66232 cri.go:89] found id: ""
	I0314 00:59:46.210548   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.210555   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:46.210563   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:46.210631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:46.275257   66232 cri.go:89] found id: ""
	I0314 00:59:46.275288   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.275299   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:46.275307   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:46.275373   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:46.319784   66232 cri.go:89] found id: ""
	I0314 00:59:46.319808   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.319819   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:46.319829   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:46.319843   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:46.366285   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:46.366319   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:46.423978   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:46.424015   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:46.438508   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:46.438535   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:46.509518   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:46.509538   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:46.509552   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:49.089210   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:49.105225   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:49.105298   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:49.146293   66232 cri.go:89] found id: ""
	I0314 00:59:49.146319   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.146326   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:49.146331   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:49.146377   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:49.190814   66232 cri.go:89] found id: ""
	I0314 00:59:49.190838   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.190847   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:49.190854   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:49.190910   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:49.230181   66232 cri.go:89] found id: ""
	I0314 00:59:49.230206   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.230214   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:49.230219   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:49.230267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:49.268437   66232 cri.go:89] found id: ""
	I0314 00:59:49.268468   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.268479   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:49.268486   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:49.268547   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:49.306838   66232 cri.go:89] found id: ""
	I0314 00:59:49.306869   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.306877   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:49.306883   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:49.306944   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:49.348907   66232 cri.go:89] found id: ""
	I0314 00:59:49.348937   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.348948   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:49.348956   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:49.349014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:49.391993   66232 cri.go:89] found id: ""
	I0314 00:59:49.392017   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.392025   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:49.392030   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:49.392133   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:49.433957   66232 cri.go:89] found id: ""
	I0314 00:59:49.433988   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.434000   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:49.434011   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:49.434026   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:49.490808   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:49.490846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:49.506203   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:49.506231   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:49.596998   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:49.597017   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:49.597034   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:49.683358   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:49.683396   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:52.230217   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:52.243787   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:52.243845   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:52.284399   66232 cri.go:89] found id: ""
	I0314 00:59:52.284424   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.284434   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:52.284441   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:52.284486   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:52.319413   66232 cri.go:89] found id: ""
	I0314 00:59:52.319439   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.319450   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:52.319457   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:52.319517   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:52.355774   66232 cri.go:89] found id: ""
	I0314 00:59:52.355804   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.355812   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:52.355818   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:52.355873   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:52.393420   66232 cri.go:89] found id: ""
	I0314 00:59:52.393445   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.393453   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:52.393459   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:52.393562   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:52.435598   66232 cri.go:89] found id: ""
	I0314 00:59:52.435627   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.435637   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:52.435646   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:52.435700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:52.478202   66232 cri.go:89] found id: ""
	I0314 00:59:52.478230   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.478241   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:52.478250   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:52.478300   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:52.515135   66232 cri.go:89] found id: ""
	I0314 00:59:52.515165   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.515176   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:52.515185   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:52.515251   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:52.553094   66232 cri.go:89] found id: ""
	I0314 00:59:52.553126   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.553143   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:52.553150   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:52.553174   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:52.568538   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:52.568565   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:52.643136   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:52.643164   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:52.643180   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:52.729674   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:52.729708   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:52.778312   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:52.778343   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:55.333953   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:55.348232   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:55.348292   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:55.386488   66232 cri.go:89] found id: ""
	I0314 00:59:55.386517   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.386526   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:55.386534   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:55.386597   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:55.428706   66232 cri.go:89] found id: ""
	I0314 00:59:55.428737   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.428748   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:55.428755   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:55.428820   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:55.465448   66232 cri.go:89] found id: ""
	I0314 00:59:55.465478   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.465489   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:55.465495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:55.465558   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:55.503442   66232 cri.go:89] found id: ""
	I0314 00:59:55.503469   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.503479   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:55.503487   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:55.503582   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:55.542098   66232 cri.go:89] found id: ""
	I0314 00:59:55.542127   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.542137   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:55.542145   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:55.542209   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:55.580298   66232 cri.go:89] found id: ""
	I0314 00:59:55.580321   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.580329   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:55.580335   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:55.580405   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:55.625460   66232 cri.go:89] found id: ""
	I0314 00:59:55.625482   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.625489   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:55.625495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:55.625544   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:55.663273   66232 cri.go:89] found id: ""
	I0314 00:59:55.663301   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.663316   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:55.663327   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:55.663373   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:55.680020   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:55.680047   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:55.764504   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:55.764523   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:55.764537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:55.842804   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:55.842837   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:55.889505   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:55.889540   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:58.445178   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:58.459321   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:58.459397   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:58.498338   66232 cri.go:89] found id: ""
	I0314 00:59:58.498362   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.498369   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:58.498374   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:58.498422   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:58.536406   66232 cri.go:89] found id: ""
	I0314 00:59:58.536434   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.536444   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:58.536451   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:58.536509   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:58.574902   66232 cri.go:89] found id: ""
	I0314 00:59:58.574930   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.574937   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:58.574943   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:58.574988   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:58.613132   66232 cri.go:89] found id: ""
	I0314 00:59:58.613154   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.613162   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:58.613167   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:58.613211   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:58.651052   66232 cri.go:89] found id: ""
	I0314 00:59:58.651076   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.651085   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:58.651104   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:58.651170   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:58.686347   66232 cri.go:89] found id: ""
	I0314 00:59:58.686375   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.686385   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:58.686393   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:58.686443   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:58.725992   66232 cri.go:89] found id: ""
	I0314 00:59:58.726021   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.726030   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:58.726037   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:58.726113   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:58.764130   66232 cri.go:89] found id: ""
	I0314 00:59:58.764153   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.764161   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:58.764169   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:58.764181   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:58.816153   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:58.816195   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:58.831675   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:58.831703   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:58.912867   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:58.912890   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:58.912902   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:59.000502   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:59.000537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:01.544701   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:01.561114   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:01.561192   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:01.603886   66232 cri.go:89] found id: ""
	I0314 01:00:01.603916   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.603924   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:01.603929   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:01.603989   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:01.645142   66232 cri.go:89] found id: ""
	I0314 01:00:01.645174   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.645189   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:01.645196   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:01.645248   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:01.686281   66232 cri.go:89] found id: ""
	I0314 01:00:01.686317   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.686326   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:01.686332   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:01.686389   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:01.729909   66232 cri.go:89] found id: ""
	I0314 01:00:01.729945   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.729955   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:01.729963   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:01.730029   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:01.773709   66232 cri.go:89] found id: ""
	I0314 01:00:01.773746   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.773754   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:01.773770   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:01.773833   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:01.813535   66232 cri.go:89] found id: ""
	I0314 01:00:01.813560   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.813568   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:01.813573   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:01.813632   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:01.855452   66232 cri.go:89] found id: ""
	I0314 01:00:01.855482   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.855493   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:01.855499   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:01.855561   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:01.892261   66232 cri.go:89] found id: ""
	I0314 01:00:01.892287   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.892297   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:01.892308   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:01.892322   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:01.945227   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:01.945258   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:01.961280   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:01.961307   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:02.039204   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:02.039227   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:02.039241   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:02.116966   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:02.117002   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:04.659869   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:04.673750   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:04.673818   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:04.713767   66232 cri.go:89] found id: ""
	I0314 01:00:04.713802   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.713813   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:04.713820   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:04.713882   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:04.750205   66232 cri.go:89] found id: ""
	I0314 01:00:04.750240   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.750252   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:04.750259   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:04.750323   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:04.789742   66232 cri.go:89] found id: ""
	I0314 01:00:04.789770   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.789778   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:04.789784   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:04.789832   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:04.826033   66232 cri.go:89] found id: ""
	I0314 01:00:04.826071   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.826091   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:04.826099   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:04.826161   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:04.865283   66232 cri.go:89] found id: ""
	I0314 01:00:04.865320   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.865330   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:04.865339   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:04.865387   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:04.906716   66232 cri.go:89] found id: ""
	I0314 01:00:04.906745   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.906756   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:04.906774   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:04.906835   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:04.943834   66232 cri.go:89] found id: ""
	I0314 01:00:04.943867   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.943879   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:04.943887   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:04.943953   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:04.986408   66232 cri.go:89] found id: ""
	I0314 01:00:04.986435   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.986445   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:04.986456   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:04.986472   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:05.040543   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:05.040583   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:05.055657   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:05.055685   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:05.133883   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:05.133907   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:05.133921   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:05.213133   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:05.213170   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:07.754533   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:07.768008   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:07.768084   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:07.807785   66232 cri.go:89] found id: ""
	I0314 01:00:07.807814   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.807823   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:07.807830   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:07.807889   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:07.847500   66232 cri.go:89] found id: ""
	I0314 01:00:07.847529   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.847539   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:07.847547   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:07.847609   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:07.886507   66232 cri.go:89] found id: ""
	I0314 01:00:07.886534   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.886557   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:07.886563   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:07.886619   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:07.923881   66232 cri.go:89] found id: ""
	I0314 01:00:07.923908   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.923918   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:07.923925   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:07.923985   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:07.959149   66232 cri.go:89] found id: ""
	I0314 01:00:07.959179   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.959190   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:07.959198   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:07.959257   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:07.995821   66232 cri.go:89] found id: ""
	I0314 01:00:07.995849   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.995861   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:07.995869   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:07.995926   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:08.033530   66232 cri.go:89] found id: ""
	I0314 01:00:08.033554   66232 logs.go:276] 0 containers: []
	W0314 01:00:08.033561   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:08.033567   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:08.033613   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:08.069304   66232 cri.go:89] found id: ""
	I0314 01:00:08.069332   66232 logs.go:276] 0 containers: []
	W0314 01:00:08.069341   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:08.069352   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:08.069366   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:08.122695   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:08.122727   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:08.138439   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:08.138466   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:08.220553   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:08.220574   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:08.220586   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:08.301108   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:08.301143   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:10.858540   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:10.872473   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:10.872527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:10.911114   66232 cri.go:89] found id: ""
	I0314 01:00:10.911143   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.911154   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:10.911161   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:10.911218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:10.951647   66232 cri.go:89] found id: ""
	I0314 01:00:10.951678   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.951690   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:10.951697   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:10.951764   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:10.989244   66232 cri.go:89] found id: ""
	I0314 01:00:10.989272   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.989283   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:10.989291   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:10.989368   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:11.029977   66232 cri.go:89] found id: ""
	I0314 01:00:11.030004   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.030011   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:11.030017   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:11.030079   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:11.067444   66232 cri.go:89] found id: ""
	I0314 01:00:11.067467   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.067474   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:11.067480   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:11.067527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:11.104202   66232 cri.go:89] found id: ""
	I0314 01:00:11.104225   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.104233   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:11.104242   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:11.104302   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:11.143323   66232 cri.go:89] found id: ""
	I0314 01:00:11.143348   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.143376   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:11.143384   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:11.143438   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:11.182568   66232 cri.go:89] found id: ""
	I0314 01:00:11.182598   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.182608   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:11.182619   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:11.182640   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:11.199532   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:11.199572   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:11.276697   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:11.276722   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:11.276737   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:11.362086   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:11.362121   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:11.407686   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:11.407721   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:13.965971   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:13.981052   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:13.981124   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:14.021047   66232 cri.go:89] found id: ""
	I0314 01:00:14.021073   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.021085   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:14.021092   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:14.021150   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:14.066605   66232 cri.go:89] found id: ""
	I0314 01:00:14.066632   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.066638   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:14.066644   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:14.066689   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:14.105253   66232 cri.go:89] found id: ""
	I0314 01:00:14.105281   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.105290   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:14.105299   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:14.105407   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:14.141084   66232 cri.go:89] found id: ""
	I0314 01:00:14.141116   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.141126   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:14.141133   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:14.141194   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:14.177883   66232 cri.go:89] found id: ""
	I0314 01:00:14.177914   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.177924   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:14.177944   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:14.178010   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:14.217102   66232 cri.go:89] found id: ""
	I0314 01:00:14.217133   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.217144   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:14.217162   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:14.217218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:14.256624   66232 cri.go:89] found id: ""
	I0314 01:00:14.256652   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.256662   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:14.256669   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:14.256731   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:14.295330   66232 cri.go:89] found id: ""
	I0314 01:00:14.295358   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.295368   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:14.295378   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:14.295395   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:14.351898   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:14.351947   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:14.368360   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:14.368399   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:14.447629   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:14.447651   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:14.447678   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:14.536275   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:14.536307   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:17.079641   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:17.093657   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:17.093730   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:17.131290   66232 cri.go:89] found id: ""
	I0314 01:00:17.131318   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.131327   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:17.131333   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:17.131379   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:17.169832   66232 cri.go:89] found id: ""
	I0314 01:00:17.169864   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.169874   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:17.169882   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:17.169942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:17.206961   66232 cri.go:89] found id: ""
	I0314 01:00:17.206982   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.206989   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:17.206994   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:17.207047   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:17.245675   66232 cri.go:89] found id: ""
	I0314 01:00:17.245703   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.245714   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:17.245721   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:17.245776   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:17.287768   66232 cri.go:89] found id: ""
	I0314 01:00:17.287797   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.287808   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:17.287815   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:17.287881   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:17.322555   66232 cri.go:89] found id: ""
	I0314 01:00:17.322590   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.322600   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:17.322608   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:17.322669   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:17.361149   66232 cri.go:89] found id: ""
	I0314 01:00:17.361176   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.361190   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:17.361197   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:17.361255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:17.397191   66232 cri.go:89] found id: ""
	I0314 01:00:17.397218   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.397227   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:17.397236   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:17.397248   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:17.412959   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:17.412988   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:17.493344   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:17.493364   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:17.493375   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:17.573531   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:17.573564   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:17.616326   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:17.616369   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:20.171238   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:20.186834   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:20.186890   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:20.226834   66232 cri.go:89] found id: ""
	I0314 01:00:20.226856   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.226863   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:20.226868   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:20.226916   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:20.263003   66232 cri.go:89] found id: ""
	I0314 01:00:20.263032   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.263043   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:20.263052   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:20.263135   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:20.306354   66232 cri.go:89] found id: ""
	I0314 01:00:20.306378   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.306388   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:20.306397   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:20.306458   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:20.342460   66232 cri.go:89] found id: ""
	I0314 01:00:20.342491   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.342501   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:20.342509   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:20.342572   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:20.383367   66232 cri.go:89] found id: ""
	I0314 01:00:20.383395   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.383406   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:20.383414   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:20.383474   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:20.423190   66232 cri.go:89] found id: ""
	I0314 01:00:20.423220   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.423231   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:20.423240   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:20.423296   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:20.473454   66232 cri.go:89] found id: ""
	I0314 01:00:20.473501   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.473510   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:20.473518   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:20.473577   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:20.517922   66232 cri.go:89] found id: ""
	I0314 01:00:20.517954   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.517964   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:20.517976   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:20.517992   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:20.572023   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:20.572059   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:20.589573   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:20.589601   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:20.670843   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:20.670866   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:20.670881   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:20.753165   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:20.753201   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:23.299823   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:23.313303   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:23.313398   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:23.352500   66232 cri.go:89] found id: ""
	I0314 01:00:23.352531   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.352542   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:23.352550   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:23.352610   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:23.391967   66232 cri.go:89] found id: ""
	I0314 01:00:23.391997   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.392005   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:23.392013   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:23.392078   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:23.433269   66232 cri.go:89] found id: ""
	I0314 01:00:23.433303   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.433314   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:23.433324   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:23.433388   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:23.471251   66232 cri.go:89] found id: ""
	I0314 01:00:23.471278   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.471290   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:23.471297   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:23.471359   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:23.507920   66232 cri.go:89] found id: ""
	I0314 01:00:23.507952   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.507960   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:23.507966   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:23.508023   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:23.550432   66232 cri.go:89] found id: ""
	I0314 01:00:23.550464   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.550474   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:23.550483   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:23.550570   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:23.589750   66232 cri.go:89] found id: ""
	I0314 01:00:23.589773   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.589781   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:23.589789   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:23.589853   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:23.626135   66232 cri.go:89] found id: ""
	I0314 01:00:23.626171   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.626191   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:23.626202   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:23.626217   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:23.681729   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:23.681763   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:23.698219   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:23.698246   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:23.773285   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:23.773309   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:23.773321   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:23.856417   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:23.856449   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:26.399787   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:26.414459   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:26.414525   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:26.452117   66232 cri.go:89] found id: ""
	I0314 01:00:26.452142   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.452153   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:26.452162   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:26.452223   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:26.488892   66232 cri.go:89] found id: ""
	I0314 01:00:26.488918   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.488925   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:26.488931   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:26.488980   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:26.530194   66232 cri.go:89] found id: ""
	I0314 01:00:26.530224   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.530234   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:26.530242   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:26.530307   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:26.571356   66232 cri.go:89] found id: ""
	I0314 01:00:26.571382   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.571394   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:26.571402   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:26.571469   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:26.611465   66232 cri.go:89] found id: ""
	I0314 01:00:26.611492   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.611500   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:26.611522   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:26.611572   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:26.649783   66232 cri.go:89] found id: ""
	I0314 01:00:26.649811   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.649821   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:26.649830   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:26.649894   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:26.687519   66232 cri.go:89] found id: ""
	I0314 01:00:26.687546   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.687556   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:26.687569   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:26.687631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:26.726277   66232 cri.go:89] found id: ""
	I0314 01:00:26.726311   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.726322   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:26.726333   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:26.726349   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:26.743133   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:26.743162   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:26.824026   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:26.824046   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:26.824062   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:26.907032   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:26.907065   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:26.977583   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:26.977609   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:29.530758   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:29.546984   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:29.547050   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:29.589191   66232 cri.go:89] found id: ""
	I0314 01:00:29.589214   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.589222   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:29.589231   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:29.589294   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:29.630380   66232 cri.go:89] found id: ""
	I0314 01:00:29.630407   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.630419   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:29.630426   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:29.630488   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:29.667407   66232 cri.go:89] found id: ""
	I0314 01:00:29.667443   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.667455   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:29.667463   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:29.667524   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:29.705745   66232 cri.go:89] found id: ""
	I0314 01:00:29.705776   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.705784   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:29.705790   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:29.705851   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:29.745280   66232 cri.go:89] found id: ""
	I0314 01:00:29.745314   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.745324   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:29.745335   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:29.745390   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:29.782900   66232 cri.go:89] found id: ""
	I0314 01:00:29.782935   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.782945   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:29.782954   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:29.783014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:29.825324   66232 cri.go:89] found id: ""
	I0314 01:00:29.825352   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.825363   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:29.825371   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:29.825436   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:29.869433   66232 cri.go:89] found id: ""
	I0314 01:00:29.869466   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.869476   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:29.869487   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:29.869502   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:29.912468   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:29.912494   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:29.965515   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:29.965555   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:29.982343   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:29.982367   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:30.057772   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:30.057797   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:30.057814   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:32.644707   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:32.667874   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:32.667950   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:32.727931   66232 cri.go:89] found id: ""
	I0314 01:00:32.727960   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.727971   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:32.727979   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:32.728038   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:32.766885   66232 cri.go:89] found id: ""
	I0314 01:00:32.766911   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.766921   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:32.766929   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:32.766989   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:32.804099   66232 cri.go:89] found id: ""
	I0314 01:00:32.804128   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.804137   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:32.804143   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:32.804200   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:32.845468   66232 cri.go:89] found id: ""
	I0314 01:00:32.845498   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.845507   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:32.845516   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:32.845607   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:32.884350   66232 cri.go:89] found id: ""
	I0314 01:00:32.884372   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.884380   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:32.884386   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:32.884437   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:32.920634   66232 cri.go:89] found id: ""
	I0314 01:00:32.920676   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.920692   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:32.920700   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:32.920756   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:32.959586   66232 cri.go:89] found id: ""
	I0314 01:00:32.959616   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.959627   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:32.959634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:32.959699   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:32.998814   66232 cri.go:89] found id: ""
	I0314 01:00:32.998854   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.998865   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:32.998882   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:32.998895   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:33.054782   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:33.054813   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:33.069772   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:33.069807   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:33.153893   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:33.153913   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:33.153925   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:33.234165   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:33.234197   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:35.781872   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:35.797220   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:35.797300   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:35.836749   66232 cri.go:89] found id: ""
	I0314 01:00:35.836773   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.836779   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:35.836785   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:35.836841   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:35.875754   66232 cri.go:89] found id: ""
	I0314 01:00:35.875782   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.875790   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:35.875797   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:35.875844   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:35.914337   66232 cri.go:89] found id: ""
	I0314 01:00:35.914360   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.914368   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:35.914373   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:35.914428   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:35.954287   66232 cri.go:89] found id: ""
	I0314 01:00:35.954306   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.954313   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:35.954318   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:35.954365   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:35.995361   66232 cri.go:89] found id: ""
	I0314 01:00:35.995385   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.995393   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:35.995398   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:35.995455   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:36.040462   66232 cri.go:89] found id: ""
	I0314 01:00:36.040488   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.040497   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:36.040503   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:36.040567   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:36.078740   66232 cri.go:89] found id: ""
	I0314 01:00:36.078786   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.078797   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:36.078814   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:36.078885   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:36.120165   66232 cri.go:89] found id: ""
	I0314 01:00:36.120193   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.120203   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:36.120213   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:36.120239   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:36.136275   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:36.136312   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:36.217907   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:36.217929   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:36.217944   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:36.295177   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:36.295212   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:36.342587   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:36.342623   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:38.900832   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:38.914693   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:38.914782   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:38.954297   66232 cri.go:89] found id: ""
	I0314 01:00:38.954333   66232 logs.go:276] 0 containers: []
	W0314 01:00:38.954347   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:38.954354   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:38.954414   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:38.992427   66232 cri.go:89] found id: ""
	I0314 01:00:38.992458   66232 logs.go:276] 0 containers: []
	W0314 01:00:38.992468   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:38.992474   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:38.992521   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:39.028595   66232 cri.go:89] found id: ""
	I0314 01:00:39.028629   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.028640   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:39.028647   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:39.028707   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:39.064418   66232 cri.go:89] found id: ""
	I0314 01:00:39.064443   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.064450   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:39.064456   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:39.064503   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:39.101007   66232 cri.go:89] found id: ""
	I0314 01:00:39.101050   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.101060   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:39.101066   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:39.101125   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:39.142913   66232 cri.go:89] found id: ""
	I0314 01:00:39.142940   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.142950   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:39.142957   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:39.143018   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:39.179957   66232 cri.go:89] found id: ""
	I0314 01:00:39.179986   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.179997   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:39.180007   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:39.180068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:39.219688   66232 cri.go:89] found id: ""
	I0314 01:00:39.219712   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.219720   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:39.219730   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:39.219747   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:39.234611   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:39.234642   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:39.306760   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:39.306808   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:39.306824   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:39.390739   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:39.390799   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:39.441782   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:39.441813   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:41.994667   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:42.008795   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:42.008865   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:42.045814   66232 cri.go:89] found id: ""
	I0314 01:00:42.045839   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.045846   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:42.045852   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:42.045903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:42.085519   66232 cri.go:89] found id: ""
	I0314 01:00:42.085550   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.085563   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:42.085571   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:42.085636   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:42.127334   66232 cri.go:89] found id: ""
	I0314 01:00:42.127359   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.127368   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:42.127374   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:42.127425   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:42.168890   66232 cri.go:89] found id: ""
	I0314 01:00:42.168915   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.168923   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:42.168929   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:42.168990   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:42.209915   66232 cri.go:89] found id: ""
	I0314 01:00:42.209937   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.209945   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:42.209950   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:42.210005   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:42.250858   66232 cri.go:89] found id: ""
	I0314 01:00:42.250880   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.250888   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:42.250897   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:42.250952   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:42.288731   66232 cri.go:89] found id: ""
	I0314 01:00:42.288779   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.288791   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:42.288799   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:42.288854   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:42.329002   66232 cri.go:89] found id: ""
	I0314 01:00:42.329030   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.329041   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:42.329052   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:42.329066   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:42.371408   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:42.371435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:42.429017   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:42.429053   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:42.446217   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:42.446255   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:42.525765   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:42.525786   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:42.525798   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:45.122600   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:45.137115   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:45.137172   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:45.177658   66232 cri.go:89] found id: ""
	I0314 01:00:45.177685   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.177693   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:45.177698   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:45.177758   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:45.218191   66232 cri.go:89] found id: ""
	I0314 01:00:45.218220   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.218228   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:45.218234   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:45.218291   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:45.263650   66232 cri.go:89] found id: ""
	I0314 01:00:45.263673   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.263682   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:45.263688   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:45.263741   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:45.299533   66232 cri.go:89] found id: ""
	I0314 01:00:45.299562   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.299573   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:45.299579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:45.299626   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:45.338985   66232 cri.go:89] found id: ""
	I0314 01:00:45.339011   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.339021   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:45.339028   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:45.339089   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:45.380178   66232 cri.go:89] found id: ""
	I0314 01:00:45.380202   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.380210   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:45.380216   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:45.380272   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:45.420424   66232 cri.go:89] found id: ""
	I0314 01:00:45.420458   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.420470   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:45.420478   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:45.420540   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:45.460829   66232 cri.go:89] found id: ""
	I0314 01:00:45.460852   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.460860   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:45.460870   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:45.460886   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:45.516541   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:45.516578   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:45.532856   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:45.532880   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:45.611749   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:45.611772   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:45.611786   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:45.693268   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:45.693297   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:48.240420   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:48.254985   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:48.255045   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:48.294167   66232 cri.go:89] found id: ""
	I0314 01:00:48.294190   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.294198   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:48.294204   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:48.294265   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:48.331189   66232 cri.go:89] found id: ""
	I0314 01:00:48.331214   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.331223   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:48.331231   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:48.331291   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:48.367601   66232 cri.go:89] found id: ""
	I0314 01:00:48.367641   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.367652   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:48.367660   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:48.367723   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:48.405032   66232 cri.go:89] found id: ""
	I0314 01:00:48.405061   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.405072   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:48.405080   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:48.405148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:48.444641   66232 cri.go:89] found id: ""
	I0314 01:00:48.444664   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.444672   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:48.444678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:48.444737   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:48.481624   66232 cri.go:89] found id: ""
	I0314 01:00:48.481653   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.481661   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:48.481667   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:48.481718   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:48.518944   66232 cri.go:89] found id: ""
	I0314 01:00:48.518976   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.518984   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:48.518989   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:48.519047   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:48.558455   66232 cri.go:89] found id: ""
	I0314 01:00:48.558495   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.558506   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:48.558518   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:48.558533   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:48.604953   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:48.604983   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:48.655766   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:48.655799   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:48.670370   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:48.670395   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:48.750567   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:48.750588   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:48.750601   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:51.342004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:51.356115   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:51.356180   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:51.393740   66232 cri.go:89] found id: ""
	I0314 01:00:51.393766   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.393773   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:51.393778   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:51.393824   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:51.432939   66232 cri.go:89] found id: ""
	I0314 01:00:51.432969   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.432980   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:51.432998   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:51.433066   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:51.469309   66232 cri.go:89] found id: ""
	I0314 01:00:51.469332   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.469340   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:51.469345   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:51.469395   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:51.506576   66232 cri.go:89] found id: ""
	I0314 01:00:51.506606   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.506618   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:51.506626   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:51.506687   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:51.547323   66232 cri.go:89] found id: ""
	I0314 01:00:51.547348   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.547358   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:51.547365   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:51.547422   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:51.588257   66232 cri.go:89] found id: ""
	I0314 01:00:51.588281   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.588289   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:51.588295   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:51.588353   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:51.629026   66232 cri.go:89] found id: ""
	I0314 01:00:51.629049   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.629057   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:51.629064   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:51.629116   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:51.668857   66232 cri.go:89] found id: ""
	I0314 01:00:51.668890   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.668903   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:51.668914   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:51.668930   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:51.724282   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:51.724329   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:51.739513   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:51.739543   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:51.815089   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:51.815116   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:51.815132   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:51.898576   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:51.898613   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:54.441122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:54.456300   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:54.456358   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:54.492731   66232 cri.go:89] found id: ""
	I0314 01:00:54.492764   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.492776   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:54.492784   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:54.492847   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:54.530965   66232 cri.go:89] found id: ""
	I0314 01:00:54.530994   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.531005   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:54.531013   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:54.531075   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:54.570440   66232 cri.go:89] found id: ""
	I0314 01:00:54.570470   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.570487   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:54.570495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:54.570557   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:54.611569   66232 cri.go:89] found id: ""
	I0314 01:00:54.611592   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.611599   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:54.611606   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:54.611660   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:54.648383   66232 cri.go:89] found id: ""
	I0314 01:00:54.648412   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.648421   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:54.648427   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:54.648476   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:54.686598   66232 cri.go:89] found id: ""
	I0314 01:00:54.686621   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.686636   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:54.686644   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:54.686701   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:54.726413   66232 cri.go:89] found id: ""
	I0314 01:00:54.726436   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.726444   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:54.726450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:54.726496   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:54.764126   66232 cri.go:89] found id: ""
	I0314 01:00:54.764167   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.764177   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:54.764187   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:54.764201   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:54.841584   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:54.841612   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:54.841628   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:54.929736   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:54.929770   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:54.972612   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:54.972638   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:55.038415   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:55.038443   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:57.553419   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:57.567807   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:57.567865   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:57.608042   66232 cri.go:89] found id: ""
	I0314 01:00:57.608069   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.608077   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:57.608082   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:57.608138   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:57.647991   66232 cri.go:89] found id: ""
	I0314 01:00:57.648022   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.648031   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:57.648036   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:57.648096   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:57.687506   66232 cri.go:89] found id: ""
	I0314 01:00:57.687529   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.687537   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:57.687544   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:57.687603   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:57.726178   66232 cri.go:89] found id: ""
	I0314 01:00:57.726214   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.726224   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:57.726233   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:57.726297   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:57.763847   66232 cri.go:89] found id: ""
	I0314 01:00:57.763874   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.763881   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:57.763887   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:57.763946   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:57.800962   66232 cri.go:89] found id: ""
	I0314 01:00:57.800990   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.801001   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:57.801010   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:57.801063   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:57.838942   66232 cri.go:89] found id: ""
	I0314 01:00:57.838963   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.838970   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:57.838975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:57.839021   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:57.875376   66232 cri.go:89] found id: ""
	I0314 01:00:57.875405   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.875415   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:57.875424   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:57.875435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:57.917732   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:57.917755   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:57.971528   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:57.971561   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:57.986854   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:57.986879   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:58.066955   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:58.066975   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:58.066985   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:00.655786   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:00.672026   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:00.672105   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:00.711128   66232 cri.go:89] found id: ""
	I0314 01:01:00.711157   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.711167   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:00.711174   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:00.711236   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:00.748236   66232 cri.go:89] found id: ""
	I0314 01:01:00.748264   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.748276   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:00.748284   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:00.748347   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:00.787436   66232 cri.go:89] found id: ""
	I0314 01:01:00.787470   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.787478   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:00.787486   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:00.787536   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:00.828583   66232 cri.go:89] found id: ""
	I0314 01:01:00.828605   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.828615   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:00.828623   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:00.828683   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:00.866856   66232 cri.go:89] found id: ""
	I0314 01:01:00.866885   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.866896   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:00.866903   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:00.866964   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:00.904860   66232 cri.go:89] found id: ""
	I0314 01:01:00.904883   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.904890   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:00.904895   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:00.904943   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:00.942199   66232 cri.go:89] found id: ""
	I0314 01:01:00.942232   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.942243   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:00.942253   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:00.942322   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:01.003925   66232 cri.go:89] found id: ""
	I0314 01:01:01.003951   66232 logs.go:276] 0 containers: []
	W0314 01:01:01.003961   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:01.003972   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:01.003987   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:01.057875   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:01.057903   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:01.074102   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:01.074128   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:01.147570   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:01.147602   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:01.147617   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:01.229816   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:01.229846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:03.775990   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:03.789826   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:03.789893   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:03.832595   66232 cri.go:89] found id: ""
	I0314 01:01:03.832620   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.832631   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:03.832639   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:03.832701   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:03.870895   66232 cri.go:89] found id: ""
	I0314 01:01:03.870914   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.870922   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:03.870928   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:03.870975   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:03.909337   66232 cri.go:89] found id: ""
	I0314 01:01:03.909368   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.909379   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:03.909387   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:03.909447   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:03.952071   66232 cri.go:89] found id: ""
	I0314 01:01:03.952100   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.952110   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:03.952119   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:03.952182   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:03.989374   66232 cri.go:89] found id: ""
	I0314 01:01:03.989403   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.989413   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:03.989421   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:03.989470   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:04.027654   66232 cri.go:89] found id: ""
	I0314 01:01:04.027683   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.027693   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:04.027702   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:04.027770   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:04.064870   66232 cri.go:89] found id: ""
	I0314 01:01:04.064904   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.064915   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:04.064923   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:04.064978   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:04.103214   66232 cri.go:89] found id: ""
	I0314 01:01:04.103246   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.103257   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:04.103268   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:04.103282   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:04.154061   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:04.154098   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:04.168955   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:04.168981   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:04.245214   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:04.245239   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:04.245254   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:04.321782   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:04.321822   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:06.864312   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:06.879181   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:06.879259   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:06.919707   66232 cri.go:89] found id: ""
	I0314 01:01:06.919731   66232 logs.go:276] 0 containers: []
	W0314 01:01:06.919742   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:06.919749   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:06.919809   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:06.964118   66232 cri.go:89] found id: ""
	I0314 01:01:06.964154   66232 logs.go:276] 0 containers: []
	W0314 01:01:06.964165   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:06.964173   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:06.964222   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:07.005923   66232 cri.go:89] found id: ""
	I0314 01:01:07.005948   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.005955   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:07.005961   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:07.006014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:07.048297   66232 cri.go:89] found id: ""
	I0314 01:01:07.048329   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.048336   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:07.048342   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:07.048400   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:07.089009   66232 cri.go:89] found id: ""
	I0314 01:01:07.089036   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.089044   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:07.089049   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:07.089108   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:07.125228   66232 cri.go:89] found id: ""
	I0314 01:01:07.125251   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.125259   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:07.125269   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:07.125329   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:07.163710   66232 cri.go:89] found id: ""
	I0314 01:01:07.163736   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.163743   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:07.163751   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:07.163797   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:07.202886   66232 cri.go:89] found id: ""
	I0314 01:01:07.202909   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.202916   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:07.202924   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:07.202936   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:07.249071   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:07.249098   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:07.304923   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:07.304958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:07.319983   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:07.320011   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:07.398592   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:07.398627   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:07.398640   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:09.987439   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:10.002348   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:10.002424   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:10.039153   66232 cri.go:89] found id: ""
	I0314 01:01:10.039173   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.039179   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:10.039185   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:10.039236   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:10.073527   66232 cri.go:89] found id: ""
	I0314 01:01:10.073557   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.073568   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:10.073575   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:10.073650   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:10.112192   66232 cri.go:89] found id: ""
	I0314 01:01:10.112213   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.112223   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:10.112230   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:10.112288   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:10.152821   66232 cri.go:89] found id: ""
	I0314 01:01:10.152848   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.152857   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:10.152862   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:10.152919   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:10.189327   66232 cri.go:89] found id: ""
	I0314 01:01:10.189352   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.189364   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:10.189371   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:10.189427   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:10.233885   66232 cri.go:89] found id: ""
	I0314 01:01:10.233909   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.233917   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:10.233923   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:10.233975   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:10.272033   66232 cri.go:89] found id: ""
	I0314 01:01:10.272061   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.272069   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:10.272075   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:10.272129   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:10.312680   66232 cri.go:89] found id: ""
	I0314 01:01:10.312706   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.312717   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:10.312727   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:10.312742   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:10.327507   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:10.327537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:10.410274   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:10.410299   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:10.410311   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:10.498686   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:10.498721   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:10.543509   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:10.543561   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:13.098621   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:13.114598   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:13.114685   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:13.169907   66232 cri.go:89] found id: ""
	I0314 01:01:13.169930   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.169937   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:13.169943   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:13.169999   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:13.237394   66232 cri.go:89] found id: ""
	I0314 01:01:13.237417   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.237429   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:13.237439   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:13.237502   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:13.295227   66232 cri.go:89] found id: ""
	I0314 01:01:13.295250   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.295258   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:13.295265   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:13.295326   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:13.333351   66232 cri.go:89] found id: ""
	I0314 01:01:13.333378   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.333388   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:13.333396   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:13.333457   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:13.376480   66232 cri.go:89] found id: ""
	I0314 01:01:13.376503   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.376511   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:13.376516   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:13.376578   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:13.416746   66232 cri.go:89] found id: ""
	I0314 01:01:13.416778   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.416786   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:13.416792   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:13.416842   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:13.455971   66232 cri.go:89] found id: ""
	I0314 01:01:13.456004   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.456014   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:13.456022   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:13.456090   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:13.493921   66232 cri.go:89] found id: ""
	I0314 01:01:13.493952   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.493964   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:13.493975   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:13.493994   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:13.582269   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:13.582317   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:13.627643   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:13.627675   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:13.680989   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:13.681021   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:13.696675   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:13.696708   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:13.768850   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:16.269385   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:16.284543   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:16.284607   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:16.322317   66232 cri.go:89] found id: ""
	I0314 01:01:16.322345   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.322356   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:16.322364   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:16.322412   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:16.362651   66232 cri.go:89] found id: ""
	I0314 01:01:16.362686   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.362697   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:16.362705   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:16.362782   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:16.403239   66232 cri.go:89] found id: ""
	I0314 01:01:16.403268   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.403276   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:16.403282   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:16.403339   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:16.442326   66232 cri.go:89] found id: ""
	I0314 01:01:16.442348   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.442355   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:16.442361   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:16.442423   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:16.480694   66232 cri.go:89] found id: ""
	I0314 01:01:16.480722   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.480733   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:16.480741   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:16.480809   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:16.521555   66232 cri.go:89] found id: ""
	I0314 01:01:16.521585   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.521596   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:16.521603   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:16.521663   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:16.564517   66232 cri.go:89] found id: ""
	I0314 01:01:16.564544   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.564555   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:16.564561   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:16.564641   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:16.602650   66232 cri.go:89] found id: ""
	I0314 01:01:16.602680   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.602690   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:16.602701   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:16.602715   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:16.645742   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:16.645777   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:16.704940   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:16.704972   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:16.720393   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:16.720420   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:16.799609   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:16.799640   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:16.799655   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:19.388482   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:19.402293   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:19.402372   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:19.439978   66232 cri.go:89] found id: ""
	I0314 01:01:19.440002   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.440025   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:19.440033   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:19.440112   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:19.475984   66232 cri.go:89] found id: ""
	I0314 01:01:19.476011   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.476019   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:19.476026   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:19.476078   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:19.512705   66232 cri.go:89] found id: ""
	I0314 01:01:19.512733   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.512742   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:19.512748   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:19.512793   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:19.552300   66232 cri.go:89] found id: ""
	I0314 01:01:19.552329   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.552339   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:19.552347   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:19.552413   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:19.598630   66232 cri.go:89] found id: ""
	I0314 01:01:19.598660   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.598670   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:19.598678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:19.598741   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:19.635883   66232 cri.go:89] found id: ""
	I0314 01:01:19.635912   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.635924   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:19.635931   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:19.635991   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:19.670339   66232 cri.go:89] found id: ""
	I0314 01:01:19.670364   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.670371   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:19.670377   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:19.670430   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:19.709469   66232 cri.go:89] found id: ""
	I0314 01:01:19.709512   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.709522   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:19.709533   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:19.709551   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:19.782157   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:19.782181   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:19.782192   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:19.866496   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:19.866531   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:19.910167   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:19.910198   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:19.963516   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:19.963546   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:22.478995   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:22.493273   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:22.493351   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:22.531559   66232 cri.go:89] found id: ""
	I0314 01:01:22.531581   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.531588   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:22.531594   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:22.531651   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:22.569478   66232 cri.go:89] found id: ""
	I0314 01:01:22.569508   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.569516   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:22.569524   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:22.569570   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:22.607573   66232 cri.go:89] found id: ""
	I0314 01:01:22.607599   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.607615   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:22.607625   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:22.607700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:22.644849   66232 cri.go:89] found id: ""
	I0314 01:01:22.644875   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.644885   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:22.644893   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:22.644950   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:22.683745   66232 cri.go:89] found id: ""
	I0314 01:01:22.683771   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.683779   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:22.683785   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:22.683845   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:22.723426   66232 cri.go:89] found id: ""
	I0314 01:01:22.723455   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.723462   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:22.723468   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:22.723512   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:22.761814   66232 cri.go:89] found id: ""
	I0314 01:01:22.761850   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.761860   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:22.761867   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:22.761918   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:22.799649   66232 cri.go:89] found id: ""
	I0314 01:01:22.799677   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.799687   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:22.799697   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:22.799707   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:22.840183   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:22.840215   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:22.893385   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:22.893416   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:22.909225   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:22.909250   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:22.982333   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:22.982353   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:22.982364   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:25.560639   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:25.575003   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:25.575082   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:25.613540   66232 cri.go:89] found id: ""
	I0314 01:01:25.613571   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.613583   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:25.613591   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:25.613653   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:25.652340   66232 cri.go:89] found id: ""
	I0314 01:01:25.652365   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.652373   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:25.652379   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:25.652425   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:25.691035   66232 cri.go:89] found id: ""
	I0314 01:01:25.691070   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.691079   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:25.691087   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:25.691152   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:25.729666   66232 cri.go:89] found id: ""
	I0314 01:01:25.729695   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.729705   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:25.729713   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:25.729783   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:25.766836   66232 cri.go:89] found id: ""
	I0314 01:01:25.766863   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.766871   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:25.766877   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:25.766934   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:25.813690   66232 cri.go:89] found id: ""
	I0314 01:01:25.813715   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.813727   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:25.813734   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:25.813796   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:25.858630   66232 cri.go:89] found id: ""
	I0314 01:01:25.858668   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.858679   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:25.858688   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:25.858774   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:25.896340   66232 cri.go:89] found id: ""
	I0314 01:01:25.896365   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.896372   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:25.896380   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:25.896392   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:25.949480   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:25.949513   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:25.965185   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:25.965211   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:26.041208   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:26.041228   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:26.041243   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:26.123892   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:26.123928   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:28.666449   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:28.679889   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:28.679948   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:28.717183   66232 cri.go:89] found id: ""
	I0314 01:01:28.717207   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.717214   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:28.717220   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:28.717275   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:28.761049   66232 cri.go:89] found id: ""
	I0314 01:01:28.761070   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.761077   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:28.761083   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:28.761133   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:28.800429   66232 cri.go:89] found id: ""
	I0314 01:01:28.800454   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.800462   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:28.800468   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:28.800523   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:28.841757   66232 cri.go:89] found id: ""
	I0314 01:01:28.841780   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.841788   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:28.841793   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:28.841838   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:28.883658   66232 cri.go:89] found id: ""
	I0314 01:01:28.883686   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.883696   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:28.883703   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:28.883759   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:28.918811   66232 cri.go:89] found id: ""
	I0314 01:01:28.918840   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.918851   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:28.918858   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:28.918916   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:28.955088   66232 cri.go:89] found id: ""
	I0314 01:01:28.955119   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.955130   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:28.955138   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:28.955195   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:28.992865   66232 cri.go:89] found id: ""
	I0314 01:01:28.992891   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.992903   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:28.992913   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:28.992931   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:29.080095   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:29.080132   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:29.127764   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:29.127789   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:29.182075   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:29.182109   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:29.198865   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:29.198891   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:29.277413   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:31.777693   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:31.792353   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:31.792426   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:31.830873   66232 cri.go:89] found id: ""
	I0314 01:01:31.830897   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.830904   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:31.830910   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:31.830955   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:31.868648   66232 cri.go:89] found id: ""
	I0314 01:01:31.868670   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.868677   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:31.868683   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:31.868733   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:31.910124   66232 cri.go:89] found id: ""
	I0314 01:01:31.910146   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.910155   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:31.910160   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:31.910209   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:31.957558   66232 cri.go:89] found id: ""
	I0314 01:01:31.957584   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.957592   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:31.957598   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:31.957652   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:32.000112   66232 cri.go:89] found id: ""
	I0314 01:01:32.000139   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.000157   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:32.000165   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:32.000229   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:32.037838   66232 cri.go:89] found id: ""
	I0314 01:01:32.037865   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.037876   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:32.037888   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:32.037949   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:32.076069   66232 cri.go:89] found id: ""
	I0314 01:01:32.076093   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.076101   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:32.076107   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:32.076172   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:32.114702   66232 cri.go:89] found id: ""
	I0314 01:01:32.114730   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.114737   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:32.114745   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:32.114757   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:32.162043   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:32.162078   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:32.219038   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:32.219075   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:32.234331   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:32.234358   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:32.307667   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:32.307688   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:32.307700   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:34.893945   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:34.907888   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:34.907966   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:34.944887   66232 cri.go:89] found id: ""
	I0314 01:01:34.944911   66232 logs.go:276] 0 containers: []
	W0314 01:01:34.944919   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:34.944925   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:34.944973   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:34.992937   66232 cri.go:89] found id: ""
	I0314 01:01:34.992964   66232 logs.go:276] 0 containers: []
	W0314 01:01:34.992974   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:34.992982   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:34.993040   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:35.030147   66232 cri.go:89] found id: ""
	I0314 01:01:35.030171   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.030178   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:35.030184   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:35.030230   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:35.065966   66232 cri.go:89] found id: ""
	I0314 01:01:35.065999   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.066010   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:35.066018   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:35.066077   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:35.104221   66232 cri.go:89] found id: ""
	I0314 01:01:35.104251   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.104262   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:35.104270   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:35.104347   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:35.145221   66232 cri.go:89] found id: ""
	I0314 01:01:35.145245   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.145253   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:35.145258   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:35.145313   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:35.185119   66232 cri.go:89] found id: ""
	I0314 01:01:35.185152   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.185162   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:35.185168   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:35.185228   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:35.228309   66232 cri.go:89] found id: ""
	I0314 01:01:35.228341   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.228352   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:35.228363   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:35.228381   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:35.242185   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:35.242213   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:35.318542   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:35.318564   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:35.318578   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:35.396003   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:35.396042   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:35.437435   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:35.437464   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:37.992023   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:38.007180   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:38.007260   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:38.047871   66232 cri.go:89] found id: ""
	I0314 01:01:38.047906   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.047917   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:38.047925   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:38.047982   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:38.085359   66232 cri.go:89] found id: ""
	I0314 01:01:38.085388   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.085397   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:38.085404   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:38.085462   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:38.126190   66232 cri.go:89] found id: ""
	I0314 01:01:38.126219   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.126227   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:38.126233   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:38.126285   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:38.163163   66232 cri.go:89] found id: ""
	I0314 01:01:38.163190   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.163197   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:38.163202   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:38.163261   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:38.204338   66232 cri.go:89] found id: ""
	I0314 01:01:38.204360   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.204367   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:38.204372   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:38.204429   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:38.246252   66232 cri.go:89] found id: ""
	I0314 01:01:38.246278   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.246288   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:38.246296   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:38.246357   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:38.281173   66232 cri.go:89] found id: ""
	I0314 01:01:38.281198   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.281205   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:38.281211   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:38.281258   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:38.323744   66232 cri.go:89] found id: ""
	I0314 01:01:38.323774   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.323784   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:38.323794   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:38.323808   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:38.377987   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:38.378020   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:38.392879   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:38.392904   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:38.479475   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:38.479501   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:38.479515   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:38.563409   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:38.563440   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:41.105122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:41.119932   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:41.119997   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:41.158809   66232 cri.go:89] found id: ""
	I0314 01:01:41.158837   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.158847   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:41.158854   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:41.158915   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:41.201150   66232 cri.go:89] found id: ""
	I0314 01:01:41.201175   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.201183   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:41.201189   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:41.201239   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:41.240139   66232 cri.go:89] found id: ""
	I0314 01:01:41.240165   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.240173   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:41.240178   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:41.240232   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:41.278220   66232 cri.go:89] found id: ""
	I0314 01:01:41.278249   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.278257   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:41.278262   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:41.278310   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:41.313130   66232 cri.go:89] found id: ""
	I0314 01:01:41.313161   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.313170   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:41.313175   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:41.313235   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:41.351266   66232 cri.go:89] found id: ""
	I0314 01:01:41.351296   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.351305   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:41.351313   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:41.351378   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:41.389765   66232 cri.go:89] found id: ""
	I0314 01:01:41.389796   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.389807   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:41.389816   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:41.389893   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:41.437503   66232 cri.go:89] found id: ""
	I0314 01:01:41.437527   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.437537   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:41.437553   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:41.437568   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:41.451137   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:41.451170   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:41.554349   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:41.554376   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:41.554391   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:41.634670   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:41.634713   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:41.678576   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:41.678607   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:44.237699   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:44.252678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:44.252757   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:44.290393   66232 cri.go:89] found id: ""
	I0314 01:01:44.290420   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.290430   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:44.290438   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:44.290492   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:44.331394   66232 cri.go:89] found id: ""
	I0314 01:01:44.331426   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.331438   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:44.331446   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:44.331506   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:44.373654   66232 cri.go:89] found id: ""
	I0314 01:01:44.373686   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.373694   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:44.373702   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:44.373764   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:44.414168   66232 cri.go:89] found id: ""
	I0314 01:01:44.414198   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.414206   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:44.414212   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:44.414259   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:44.451158   66232 cri.go:89] found id: ""
	I0314 01:01:44.451183   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.451193   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:44.451201   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:44.451269   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:44.495410   66232 cri.go:89] found id: ""
	I0314 01:01:44.495436   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.495443   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:44.495450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:44.495509   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:44.539100   66232 cri.go:89] found id: ""
	I0314 01:01:44.539123   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.539129   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:44.539136   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:44.539189   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:44.581428   66232 cri.go:89] found id: ""
	I0314 01:01:44.581451   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.581463   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:44.581473   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:44.581491   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:44.657373   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:44.657393   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:44.657406   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:44.742163   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:44.742198   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:44.786447   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:44.786481   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:44.840479   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:44.840534   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
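The cycle above repeats for the rest of this start attempt: minikube polls for a running kube-apiserver (pgrep, then `sudo crictl ps -a --quiet --name=...` for each control-plane component), finds no containers, and falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output before retrying a few seconds later. Below is a minimal sketch of the same checks run by hand; it assumes shell access to the node (for example via `minikube ssh`, not shown in this log) and otherwise uses only the commands already visible above.

    # Sketch: reproduce one diagnostic cycle by hand (assumes shell access to the node).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'          # is an apiserver process running at all?
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$c"               # any CRI container for this component?
    done
    sudo journalctl -u kubelet -n 400                     # kubelet log tail
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig           # fails with "connection refused" while the apiserver is down
    sudo journalctl -u crio -n 400                        # CRI-O log tail
    sudo crictl ps -a                                     # overall container status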
	I0314 01:01:47.355369   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:47.369427   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:47.369491   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:47.408529   66232 cri.go:89] found id: ""
	I0314 01:01:47.408559   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.408567   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:47.408574   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:47.408619   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:47.445164   66232 cri.go:89] found id: ""
	I0314 01:01:47.445192   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.445201   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:47.445208   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:47.445255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:47.503333   66232 cri.go:89] found id: ""
	I0314 01:01:47.503367   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.503378   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:47.503385   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:47.503441   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:47.544289   66232 cri.go:89] found id: ""
	I0314 01:01:47.544313   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.544322   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:47.544329   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:47.544389   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:47.581686   66232 cri.go:89] found id: ""
	I0314 01:01:47.581707   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.581715   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:47.581726   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:47.581773   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:47.620907   66232 cri.go:89] found id: ""
	I0314 01:01:47.620937   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.620948   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:47.620954   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:47.620999   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:47.655975   66232 cri.go:89] found id: ""
	I0314 01:01:47.656006   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.656018   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:47.656026   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:47.656088   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:47.694787   66232 cri.go:89] found id: ""
	I0314 01:01:47.694813   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.694822   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:47.694832   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:47.694846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:47.732722   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:47.732752   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:47.784521   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:47.784551   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:47.798074   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:47.798096   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:47.872951   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:47.872971   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:47.872984   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:50.456896   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:50.472083   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:50.472159   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:50.510213   66232 cri.go:89] found id: ""
	I0314 01:01:50.510236   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.510244   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:50.510251   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:50.510308   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:50.551878   66232 cri.go:89] found id: ""
	I0314 01:01:50.551906   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.551915   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:50.551923   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:50.551983   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:50.599971   66232 cri.go:89] found id: ""
	I0314 01:01:50.599993   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.600000   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:50.600011   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:50.600068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:50.636105   66232 cri.go:89] found id: ""
	I0314 01:01:50.636135   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.636146   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:50.636154   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:50.636218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:50.674154   66232 cri.go:89] found id: ""
	I0314 01:01:50.674188   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.674199   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:50.674207   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:50.674273   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:50.711946   66232 cri.go:89] found id: ""
	I0314 01:01:50.711980   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.711992   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:50.711999   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:50.712048   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:50.750574   66232 cri.go:89] found id: ""
	I0314 01:01:50.750601   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.750612   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:50.750620   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:50.750679   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:50.788991   66232 cri.go:89] found id: ""
	I0314 01:01:50.789022   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.789033   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:50.789045   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:50.789060   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:50.842491   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:50.842524   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:50.857759   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:50.857785   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:50.929715   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:50.929739   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:50.929754   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:51.008843   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:51.008883   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:53.554369   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:53.569045   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:53.569125   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:53.607571   66232 cri.go:89] found id: ""
	I0314 01:01:53.607602   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.607613   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:53.607621   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:53.607700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:53.647998   66232 cri.go:89] found id: ""
	I0314 01:01:53.648027   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.648037   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:53.648044   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:53.648116   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:53.684825   66232 cri.go:89] found id: ""
	I0314 01:01:53.684855   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.684866   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:53.684873   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:53.684931   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:53.722438   66232 cri.go:89] found id: ""
	I0314 01:01:53.722465   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.722476   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:53.722484   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:53.722543   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:53.761945   66232 cri.go:89] found id: ""
	I0314 01:01:53.761987   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.761999   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:53.762014   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:53.762075   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:53.799307   66232 cri.go:89] found id: ""
	I0314 01:01:53.799338   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.799349   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:53.799362   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:53.799420   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:53.838685   66232 cri.go:89] found id: ""
	I0314 01:01:53.838713   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.838724   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:53.838731   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:53.838810   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:53.884324   66232 cri.go:89] found id: ""
	I0314 01:01:53.884351   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.884360   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:53.884370   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:53.884382   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:53.942495   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:53.942527   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:54.007790   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:54.007828   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:54.023348   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:54.023378   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:54.099122   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:54.099150   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:54.099165   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:56.679464   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:56.693691   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:56.693753   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:56.731721   66232 cri.go:89] found id: ""
	I0314 01:01:56.731749   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.731756   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:56.731761   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:56.731811   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:56.766579   66232 cri.go:89] found id: ""
	I0314 01:01:56.766607   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.766614   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:56.766620   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:56.766675   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:56.807537   66232 cri.go:89] found id: ""
	I0314 01:01:56.807565   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.807574   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:56.807579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:56.807631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:56.849077   66232 cri.go:89] found id: ""
	I0314 01:01:56.849100   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.849106   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:56.849112   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:56.849169   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:56.890982   66232 cri.go:89] found id: ""
	I0314 01:01:56.891003   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.891011   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:56.891016   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:56.891061   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:56.929769   66232 cri.go:89] found id: ""
	I0314 01:01:56.929790   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.929799   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:56.929805   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:56.929848   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:56.967319   66232 cri.go:89] found id: ""
	I0314 01:01:56.967346   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.967356   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:56.967363   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:56.967421   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:57.004649   66232 cri.go:89] found id: ""
	I0314 01:01:57.004670   66232 logs.go:276] 0 containers: []
	W0314 01:01:57.004677   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:57.004685   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:57.004696   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:57.018578   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:57.018604   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:57.090826   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:57.090852   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:57.090868   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:57.170367   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:57.170398   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:57.216138   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:57.216179   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:59.769685   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:59.786652   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:59.786713   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:59.869453   66232 cri.go:89] found id: ""
	I0314 01:01:59.869480   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.869491   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:59.869499   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:59.869568   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:59.915747   66232 cri.go:89] found id: ""
	I0314 01:01:59.915769   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.915777   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:59.915782   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:59.915840   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:59.951088   66232 cri.go:89] found id: ""
	I0314 01:01:59.951117   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.951127   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:59.951133   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:59.951197   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:59.986847   66232 cri.go:89] found id: ""
	I0314 01:01:59.986877   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.986890   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:59.986898   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:59.986954   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:00.025390   66232 cri.go:89] found id: ""
	I0314 01:02:00.025420   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.025432   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:00.025440   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:00.025493   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:00.064174   66232 cri.go:89] found id: ""
	I0314 01:02:00.064206   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.064217   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:00.064226   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:00.064286   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:00.102079   66232 cri.go:89] found id: ""
	I0314 01:02:00.102102   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.102112   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:00.102119   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:00.102179   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:00.138672   66232 cri.go:89] found id: ""
	I0314 01:02:00.138700   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.138711   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:00.138721   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:00.138740   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:00.153516   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:00.153548   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:00.226585   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:00.226616   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:00.226631   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:00.307861   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:00.307898   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:00.353938   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:00.353966   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:02.909252   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:02.923483   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:02.923560   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:02.964379   66232 cri.go:89] found id: ""
	I0314 01:02:02.964408   66232 logs.go:276] 0 containers: []
	W0314 01:02:02.964419   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:02.964427   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:02.964486   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:03.001988   66232 cri.go:89] found id: ""
	I0314 01:02:03.002018   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.002028   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:03.002036   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:03.002106   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:03.043534   66232 cri.go:89] found id: ""
	I0314 01:02:03.043561   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.043572   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:03.043579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:03.043637   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:03.083413   66232 cri.go:89] found id: ""
	I0314 01:02:03.083436   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.083444   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:03.083450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:03.083504   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:03.117627   66232 cri.go:89] found id: ""
	I0314 01:02:03.117652   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.117664   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:03.117670   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:03.117718   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:03.151758   66232 cri.go:89] found id: ""
	I0314 01:02:03.151791   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.151802   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:03.151810   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:03.151861   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:03.192091   66232 cri.go:89] found id: ""
	I0314 01:02:03.192112   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.192118   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:03.192124   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:03.192178   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:03.235995   66232 cri.go:89] found id: ""
	I0314 01:02:03.236019   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.236029   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:03.236039   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:03.236053   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:03.289431   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:03.289475   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:03.305271   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:03.305325   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:03.383902   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:03.383922   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:03.383937   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:03.462882   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:03.462926   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:06.007991   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:06.023709   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:06.023768   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:06.063630   66232 cri.go:89] found id: ""
	I0314 01:02:06.063655   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.063662   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:06.063669   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:06.063727   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:06.103042   66232 cri.go:89] found id: ""
	I0314 01:02:06.103074   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.103083   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:06.103092   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:06.103149   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:06.139774   66232 cri.go:89] found id: ""
	I0314 01:02:06.139799   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.139810   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:06.139817   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:06.139874   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:06.176671   66232 cri.go:89] found id: ""
	I0314 01:02:06.176713   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.176724   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:06.176732   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:06.176798   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:06.216798   66232 cri.go:89] found id: ""
	I0314 01:02:06.216828   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.216840   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:06.216847   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:06.216903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:06.256606   66232 cri.go:89] found id: ""
	I0314 01:02:06.256635   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.256645   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:06.256653   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:06.256712   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:06.295087   66232 cri.go:89] found id: ""
	I0314 01:02:06.295119   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.295129   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:06.295137   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:06.295198   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:06.329411   66232 cri.go:89] found id: ""
	I0314 01:02:06.329441   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.329454   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:06.329464   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:06.329489   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:06.412363   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:06.412409   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:06.458902   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:06.458932   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:06.510147   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:06.510182   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:06.526670   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:06.526695   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:06.604970   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
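Every "describe nodes" attempt in this window fails the same way: the bundled kubectl reads /var/lib/minikube/kubeconfig, which per the error text points at localhost:8443, and since crictl finds no kube-apiserver container, nothing is listening on that port, so the connection is refused. One way to confirm that state by hand (a sketch using standard tooling, not taken from this log) is:

    # Sketch: confirm nothing is serving on the apiserver port named in the error.
    grep 'server:' /var/lib/minikube/kubeconfig                       # endpoint the bundled kubectl is using
    sudo ss -tlnp | grep ':8443' || echo 'nothing listening on 8443'
    curl -sk https://localhost:8443/healthz || echo 'connect failed, apiserver not up yet'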
	I0314 01:02:09.106124   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:09.119646   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:09.119709   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:09.155771   66232 cri.go:89] found id: ""
	I0314 01:02:09.155804   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.155815   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:09.155824   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:09.155883   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:09.191683   66232 cri.go:89] found id: ""
	I0314 01:02:09.191722   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.191734   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:09.191742   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:09.191808   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:09.227010   66232 cri.go:89] found id: ""
	I0314 01:02:09.227033   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.227041   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:09.227050   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:09.227118   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:09.262820   66232 cri.go:89] found id: ""
	I0314 01:02:09.262850   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.262861   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:09.262869   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:09.262925   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:09.296057   66232 cri.go:89] found id: ""
	I0314 01:02:09.296092   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.296102   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:09.296109   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:09.296171   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:09.329589   66232 cri.go:89] found id: ""
	I0314 01:02:09.329615   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.329626   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:09.329634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:09.329685   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:09.374675   66232 cri.go:89] found id: ""
	I0314 01:02:09.374702   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.374710   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:09.374718   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:09.374785   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:09.412467   66232 cri.go:89] found id: ""
	I0314 01:02:09.412497   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.412508   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:09.412518   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:09.412535   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:09.465354   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:09.465386   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:09.481823   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:09.481849   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:09.558431   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:09.558458   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:09.558475   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:09.641132   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:09.641171   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:12.190189   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:12.203783   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:12.203858   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:12.240189   66232 cri.go:89] found id: ""
	I0314 01:02:12.240219   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.240230   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:12.240238   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:12.240296   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:12.276307   66232 cri.go:89] found id: ""
	I0314 01:02:12.276336   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.276346   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:12.276354   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:12.276415   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:12.316916   66232 cri.go:89] found id: ""
	I0314 01:02:12.316949   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.316967   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:12.316975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:12.317036   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:12.356871   66232 cri.go:89] found id: ""
	I0314 01:02:12.356900   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.356910   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:12.356918   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:12.356981   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:12.391983   66232 cri.go:89] found id: ""
	I0314 01:02:12.392015   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.392026   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:12.392035   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:12.392105   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:12.428823   66232 cri.go:89] found id: ""
	I0314 01:02:12.428857   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.428868   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:12.428877   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:12.428938   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:12.466319   66232 cri.go:89] found id: ""
	I0314 01:02:12.466342   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.466349   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:12.466354   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:12.466413   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:12.502277   66232 cri.go:89] found id: ""
	I0314 01:02:12.502309   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.502321   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:12.502333   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:12.502352   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:12.582309   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:12.582340   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:12.621333   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:12.621357   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:12.678396   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:12.678432   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:12.694371   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:12.694397   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:12.767592   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:15.268149   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:15.281634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:15.281707   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:15.316336   66232 cri.go:89] found id: ""
	I0314 01:02:15.316358   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.316366   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:15.316373   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:15.316437   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:15.356168   66232 cri.go:89] found id: ""
	I0314 01:02:15.356194   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.356201   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:15.356206   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:15.356257   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:15.394686   66232 cri.go:89] found id: ""
	I0314 01:02:15.394714   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.394726   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:15.394734   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:15.394813   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:15.433996   66232 cri.go:89] found id: ""
	I0314 01:02:15.434023   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.434034   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:15.434042   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:15.434103   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:15.479544   66232 cri.go:89] found id: ""
	I0314 01:02:15.479572   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.479583   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:15.479590   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:15.479659   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:15.514835   66232 cri.go:89] found id: ""
	I0314 01:02:15.514865   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.514875   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:15.514883   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:15.514942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:15.554980   66232 cri.go:89] found id: ""
	I0314 01:02:15.555011   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.555022   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:15.555030   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:15.555092   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:15.590130   66232 cri.go:89] found id: ""
	I0314 01:02:15.590167   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.590178   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:15.590188   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:15.590203   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:15.658375   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:15.658394   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:15.658407   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:15.737774   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:15.737806   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:15.780480   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:15.780512   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:15.832787   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:15.832830   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:18.350032   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:18.364871   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:18.364931   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:18.406581   66232 cri.go:89] found id: ""
	I0314 01:02:18.406611   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.406620   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:18.406633   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:18.406696   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:18.446140   66232 cri.go:89] found id: ""
	I0314 01:02:18.446166   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.446176   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:18.446183   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:18.446242   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:18.492662   66232 cri.go:89] found id: ""
	I0314 01:02:18.492705   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.492713   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:18.492719   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:18.492777   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:18.535933   66232 cri.go:89] found id: ""
	I0314 01:02:18.535961   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.535972   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:18.535980   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:18.536056   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:18.574133   66232 cri.go:89] found id: ""
	I0314 01:02:18.574159   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.574167   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:18.574173   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:18.574227   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:18.612726   66232 cri.go:89] found id: ""
	I0314 01:02:18.612750   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.612757   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:18.612763   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:18.612815   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:18.653068   66232 cri.go:89] found id: ""
	I0314 01:02:18.653092   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.653099   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:18.653105   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:18.653148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:18.692840   66232 cri.go:89] found id: ""
	I0314 01:02:18.692880   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.692890   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:18.692902   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:18.692915   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:18.748680   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:18.748717   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:18.764026   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:18.764054   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:18.841767   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:18.841791   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:18.841805   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:18.923479   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:18.923512   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:21.467679   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:21.482326   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:21.482400   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:21.519603   66232 cri.go:89] found id: ""
	I0314 01:02:21.519627   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.519635   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:21.519641   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:21.519711   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:21.562301   66232 cri.go:89] found id: ""
	I0314 01:02:21.562325   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.562333   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:21.562338   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:21.562395   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:21.599503   66232 cri.go:89] found id: ""
	I0314 01:02:21.599531   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.599539   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:21.599545   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:21.599598   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:21.635347   66232 cri.go:89] found id: ""
	I0314 01:02:21.635378   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.635390   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:21.635397   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:21.635458   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:21.672622   66232 cri.go:89] found id: ""
	I0314 01:02:21.672648   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.672658   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:21.672667   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:21.672719   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:21.713177   66232 cri.go:89] found id: ""
	I0314 01:02:21.713201   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.713209   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:21.713217   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:21.713277   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:21.754273   66232 cri.go:89] found id: ""
	I0314 01:02:21.754312   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.754336   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:21.754350   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:21.754408   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:21.793782   66232 cri.go:89] found id: ""
	I0314 01:02:21.793832   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.793852   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:21.793864   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:21.793886   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:21.877495   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:21.877521   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:21.877536   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:21.963446   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:21.963485   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:22.005250   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:22.005286   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:22.081328   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:22.081368   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:24.599757   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:24.615216   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:24.615273   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:24.654495   66232 cri.go:89] found id: ""
	I0314 01:02:24.654521   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.654529   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:24.654535   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:24.654581   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:24.691822   66232 cri.go:89] found id: ""
	I0314 01:02:24.691854   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.691864   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:24.691872   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:24.691927   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:24.734755   66232 cri.go:89] found id: ""
	I0314 01:02:24.734796   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.734806   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:24.734812   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:24.734864   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:24.770474   66232 cri.go:89] found id: ""
	I0314 01:02:24.770502   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.770513   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:24.770520   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:24.770564   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:24.807518   66232 cri.go:89] found id: ""
	I0314 01:02:24.807549   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.807562   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:24.807570   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:24.807636   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:24.844469   66232 cri.go:89] found id: ""
	I0314 01:02:24.844500   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.844513   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:24.844521   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:24.844585   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:24.882099   66232 cri.go:89] found id: ""
	I0314 01:02:24.882136   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.882147   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:24.882155   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:24.882215   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:24.922711   66232 cri.go:89] found id: ""
	I0314 01:02:24.922751   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.922773   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:24.922787   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:24.922802   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:24.965349   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:24.965374   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:25.021552   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:25.021585   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:25.039990   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:25.040027   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:25.116945   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:25.116967   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:25.116981   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:27.706427   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:27.722129   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:27.722193   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:27.762976   66232 cri.go:89] found id: ""
	I0314 01:02:27.763015   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.763023   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:27.763029   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:27.763077   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:27.803939   66232 cri.go:89] found id: ""
	I0314 01:02:27.803979   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.803990   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:27.803997   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:27.804068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:27.844923   66232 cri.go:89] found id: ""
	I0314 01:02:27.844946   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.844953   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:27.844959   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:27.845015   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:27.882694   66232 cri.go:89] found id: ""
	I0314 01:02:27.882717   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.882725   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:27.882731   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:27.882801   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:27.922926   66232 cri.go:89] found id: ""
	I0314 01:02:27.922958   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.922968   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:27.922975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:27.923035   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:27.960120   66232 cri.go:89] found id: ""
	I0314 01:02:27.960149   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.960160   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:27.960168   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:27.960228   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:28.015021   66232 cri.go:89] found id: ""
	I0314 01:02:28.015047   66232 logs.go:276] 0 containers: []
	W0314 01:02:28.015056   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:28.015062   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:28.015119   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:28.054923   66232 cri.go:89] found id: ""
	I0314 01:02:28.054946   66232 logs.go:276] 0 containers: []
	W0314 01:02:28.054952   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:28.054960   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:28.054972   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:28.111690   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:28.111723   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:28.126158   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:28.126189   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:28.200521   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:28.200542   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:28.200554   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:28.279637   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:28.279672   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:30.824286   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:30.840707   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.840787   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.888628   66232 cri.go:89] found id: ""
	I0314 01:02:30.888658   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.888669   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:30.888677   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.888758   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.934219   66232 cri.go:89] found id: ""
	I0314 01:02:30.934254   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.934264   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:30.934272   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.934332   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.979679   66232 cri.go:89] found id: ""
	I0314 01:02:30.979702   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.979713   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:30.979721   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.979792   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:31.024045   66232 cri.go:89] found id: ""
	I0314 01:02:31.024074   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.024085   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:31.024093   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:31.024150   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:31.070153   66232 cri.go:89] found id: ""
	I0314 01:02:31.070185   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.070197   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:31.070204   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:31.070267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:31.121943   66232 cri.go:89] found id: ""
	I0314 01:02:31.121972   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.121983   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:31.121992   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:31.122056   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:31.168934   66232 cri.go:89] found id: ""
	I0314 01:02:31.168951   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.168959   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:31.168965   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:31.169040   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:31.213885   66232 cri.go:89] found id: ""
	I0314 01:02:31.213917   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.213929   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:31.213939   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:31.213958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:31.304097   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:31.304127   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:31.304142   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:31.388525   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:31.388566   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:31.442920   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:31.442953   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:31.505932   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:31.505965   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:34.021725   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:34.039342   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:34.039420   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:34.086740   66232 cri.go:89] found id: ""
	I0314 01:02:34.086775   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.086787   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:34.086803   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:34.086869   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:34.131404   66232 cri.go:89] found id: ""
	I0314 01:02:34.131432   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.131440   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:34.131445   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:34.131497   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:34.179153   66232 cri.go:89] found id: ""
	I0314 01:02:34.179182   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.179192   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:34.179199   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:34.179255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:34.228867   66232 cri.go:89] found id: ""
	I0314 01:02:34.228892   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.228902   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:34.228908   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:34.228942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:34.272680   66232 cri.go:89] found id: ""
	I0314 01:02:34.272705   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.272715   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:34.272722   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:34.272772   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:34.311626   66232 cri.go:89] found id: ""
	I0314 01:02:34.311672   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.311684   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:34.311692   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:34.311751   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:34.349977   66232 cri.go:89] found id: ""
	I0314 01:02:34.349998   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.350006   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:34.350012   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:34.350070   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:34.398456   66232 cri.go:89] found id: ""
	I0314 01:02:34.398481   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.398491   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:34.398503   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:34.398515   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:34.472170   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:34.472208   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:34.498046   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:34.498076   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:34.574474   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:34.574496   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:34.574529   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:34.656398   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:34.656435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:37.201236   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:37.216950   66232 kubeadm.go:591] duration metric: took 4m2.27726413s to restartPrimaryControlPlane
	W0314 01:02:37.217024   66232 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 01:02:37.217054   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 01:02:39.562661   66232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.345580159s)
	I0314 01:02:39.562733   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:39.579845   66232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 01:02:39.592242   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:02:39.603936   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:02:39.603962   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 01:02:39.604023   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:02:39.614854   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:02:39.614909   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:02:39.626602   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:02:39.637282   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:02:39.637334   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:02:39.650019   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:02:39.662020   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:02:39.662084   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:02:39.674740   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:02:39.685131   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:02:39.685190   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
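Before re-running kubeadm, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it. A hand-run sketch of that stale-config cleanup (illustrative only; the endpoint string and file names are taken verbatim from the log):

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"   # missing or stale config, as in the kubeadm.go:162 lines above
	  fi
	done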
	I0314 01:02:39.696251   66232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:02:39.768972   66232 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 01:02:39.769055   66232 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:02:39.926950   66232 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:02:39.927086   66232 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:02:39.927239   66232 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 01:02:40.161671   66232 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:02:40.164039   66232 out.go:204]   - Generating certificates and keys ...
	I0314 01:02:40.164124   66232 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:02:40.164219   66232 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:02:40.164321   66232 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 01:02:40.164411   66232 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 01:02:40.164508   66232 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 01:02:40.164595   66232 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 01:02:40.164680   66232 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 01:02:40.164762   66232 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 01:02:40.164868   66232 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 01:02:40.164982   66232 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 01:02:40.165050   66232 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 01:02:40.165123   66232 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:02:40.264416   66232 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:02:40.417229   66232 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:02:40.489457   66232 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:02:40.743517   66232 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:02:40.759319   66232 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:02:40.760643   66232 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:02:40.760715   66232 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:02:40.939953   66232 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 01:02:40.942001   66232 out.go:204]   - Booting up control plane ...
	I0314 01:02:40.942144   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:02:40.951012   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:02:40.952452   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:02:40.953336   66232 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:02:40.960365   66232 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:03:20.960311   66232 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 01:03:20.961416   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:20.961634   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:25.961895   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:25.962127   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:35.962149   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:35.962352   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:55.963116   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:55.963372   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:04:35.964528   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:04:35.964814   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:04:35.964841   66232 kubeadm.go:309] 
	I0314 01:04:35.964900   66232 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 01:04:35.964961   66232 kubeadm.go:309] 		timed out waiting for the condition
	I0314 01:04:35.964972   66232 kubeadm.go:309] 
	I0314 01:04:35.965026   66232 kubeadm.go:309] 	This error is likely caused by:
	I0314 01:04:35.965074   66232 kubeadm.go:309] 		- The kubelet is not running
	I0314 01:04:35.965219   66232 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 01:04:35.965231   66232 kubeadm.go:309] 
	I0314 01:04:35.965372   66232 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 01:04:35.965421   66232 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 01:04:35.965476   66232 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 01:04:35.965489   66232 kubeadm.go:309] 
	I0314 01:04:35.965638   66232 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 01:04:35.965743   66232 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 01:04:35.965753   66232 kubeadm.go:309] 
	I0314 01:04:35.965872   66232 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 01:04:35.965991   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 01:04:35.966110   66232 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 01:04:35.966220   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 01:04:35.966237   66232 kubeadm.go:309] 
	I0314 01:04:35.966903   66232 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:04:35.967031   66232 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 01:04:35.967165   66232 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0314 01:04:35.967278   66232 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
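The [kubelet-check] failures above are kubeadm probing the kubelet's local healthz endpoint on port 10248 and never getting an answer. A quick way to reproduce the same checks by hand on the node, using only the commands kubeadm itself suggests plus the probed URL from the log:

	systemctl status kubelet --no-pager         # is the kubelet unit running at all?
	sudo journalctl -xeu kubelet | tail -n 50   # recent kubelet errors
	curl -sSL http://localhost:10248/healthz    # the exact probe kubeadm keeps retrying
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause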
	
	I0314 01:04:35.967374   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 01:04:36.533381   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:04:36.550315   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:04:36.562559   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:04:36.562582   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 01:04:36.562646   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:04:36.573080   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:04:36.573148   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:04:36.583367   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:04:36.592837   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:04:36.592905   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:04:36.602671   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:04:36.611880   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:04:36.611923   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:04:36.621373   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:04:36.630200   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:04:36.630250   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 01:04:36.639622   66232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:04:36.876475   66232 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:06:32.905531   66232 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 01:06:32.905658   66232 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 01:06:32.907378   66232 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 01:06:32.907462   66232 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:06:32.907597   66232 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:06:32.907758   66232 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:06:32.907878   66232 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 01:06:32.907969   66232 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:06:32.909826   66232 out.go:204]   - Generating certificates and keys ...
	I0314 01:06:32.909915   66232 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:06:32.909976   66232 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:06:32.910065   66232 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 01:06:32.910143   66232 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 01:06:32.910232   66232 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 01:06:32.910306   66232 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 01:06:32.910371   66232 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 01:06:32.910450   66232 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 01:06:32.910516   66232 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 01:06:32.910579   66232 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 01:06:32.910616   66232 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 01:06:32.910705   66232 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:06:32.910809   66232 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:06:32.910860   66232 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:06:32.910946   66232 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:06:32.911032   66232 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:06:32.911131   66232 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:06:32.911225   66232 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:06:32.911290   66232 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:06:32.911360   66232 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 01:06:32.912972   66232 out.go:204]   - Booting up control plane ...
	I0314 01:06:32.913087   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:06:32.913169   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:06:32.913260   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:06:32.913336   66232 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:06:32.913475   66232 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:06:32.913555   66232 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 01:06:32.913645   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.913879   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.913979   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914216   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914294   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914461   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914521   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914704   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914827   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.915063   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.915076   66232 kubeadm.go:309] 
	I0314 01:06:32.915112   66232 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 01:06:32.915167   66232 kubeadm.go:309] 		timed out waiting for the condition
	I0314 01:06:32.915177   66232 kubeadm.go:309] 
	I0314 01:06:32.915230   66232 kubeadm.go:309] 	This error is likely caused by:
	I0314 01:06:32.915269   66232 kubeadm.go:309] 		- The kubelet is not running
	I0314 01:06:32.915353   66232 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 01:06:32.915360   66232 kubeadm.go:309] 
	I0314 01:06:32.915441   66232 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 01:06:32.915469   66232 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 01:06:32.915498   66232 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 01:06:32.915505   66232 kubeadm.go:309] 
	I0314 01:06:32.915613   66232 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 01:06:32.915700   66232 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 01:06:32.915712   66232 kubeadm.go:309] 
	I0314 01:06:32.915855   66232 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 01:06:32.915955   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 01:06:32.916023   66232 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 01:06:32.916088   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 01:06:32.916154   66232 kubeadm.go:393] duration metric: took 7m58.036160375s to StartCluster
	I0314 01:06:32.916166   66232 kubeadm.go:309] 
	I0314 01:06:32.916226   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:06:32.916295   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:06:32.972336   66232 cri.go:89] found id: ""
	I0314 01:06:32.972364   66232 logs.go:276] 0 containers: []
	W0314 01:06:32.972371   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:06:32.972380   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:06:32.972434   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:06:33.023008   66232 cri.go:89] found id: ""
	I0314 01:06:33.023039   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.023050   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:06:33.023057   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:06:33.023130   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:06:33.061974   66232 cri.go:89] found id: ""
	I0314 01:06:33.062002   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.062011   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:06:33.062017   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:06:33.062085   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:06:33.101221   66232 cri.go:89] found id: ""
	I0314 01:06:33.101252   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.101264   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:06:33.101271   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:06:33.101330   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:06:33.139665   66232 cri.go:89] found id: ""
	I0314 01:06:33.139689   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.139697   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:06:33.139707   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:06:33.139753   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:06:33.186493   66232 cri.go:89] found id: ""
	I0314 01:06:33.186519   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.186530   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:06:33.186538   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:06:33.186610   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:06:33.236042   66232 cri.go:89] found id: ""
	I0314 01:06:33.236071   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.236083   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:06:33.236091   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:06:33.236148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:06:33.279285   66232 cri.go:89] found id: ""
	I0314 01:06:33.279316   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.279326   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:06:33.279338   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:06:33.279361   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:06:33.331702   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:06:33.331734   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:06:33.347222   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:06:33.347249   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:06:33.437201   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:06:33.437225   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:06:33.437240   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:06:33.550099   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:06:33.550135   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0314 01:06:33.596794   66232 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 01:06:33.596833   66232 out.go:239] * 
	* 
	W0314 01:06:33.596906   66232 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 01:06:33.596927   66232 out.go:239] * 
	* 
	W0314 01:06:33.597713   66232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 01:06:33.601567   66232 out.go:177] 
	W0314 01:06:33.602661   66232 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 01:06:33.602704   66232 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 01:06:33.602722   66232 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 01:06:33.604223   66232 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-004791 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
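The kubeadm output captured above repeatedly identifies the kubelet as the component that never became healthy on the old-k8s-version node. As a minimal troubleshooting sketch assembled from the commands the log itself suggests (reaching the VM via 'minikube ssh -p old-k8s-version-004791' is an assumption here; the remaining commands are quoted from the kubeadm advice in the log), one could inspect the node like this:

	minikube ssh -p old-k8s-version-004791    # assumed entry point to the failing node
	sudo systemctl status kubelet             # is the kubelet service active?
	sudo journalctl -xeu kubelet              # recent kubelet logs and exit reasons
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID    # substitute a failing container ID

If the kubelet logs point at a cgroup-driver mismatch, the suggestion already printed in the log (passing --extra-config=kubelet.cgroup-driver=systemd to minikube start) is the corresponding fix to try.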
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791: exit status 2 (251.340723ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-004791 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-004791 logs -n 25: (1.617651349s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-326260 sudo cat                              | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo find                             | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo crio                             | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-326260                                       | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-573365 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | disable-driver-mounts-573365                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:51 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-164135            | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-585806             | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-652215  | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC | 14 Mar 24 00:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC |                     |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-004791        | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-164135                 | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC | 14 Mar 24 01:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-585806                  | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-652215       | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 00:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-004791             | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC | 14 Mar 24 00:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
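
For reference, the last row of the table above (the start that never completed) can be assembled into a single command line. This is a sketch built only from the flags listed in that row; the out/minikube-linux-amd64 binary path is taken from the MINIKUBE_BIN value reported later in this log and is otherwise an assumption.

	# Hypothetical reproduction of the final "start -p old-k8s-version-004791" row above; not part of the original log
	out/minikube-linux-amd64 start -p old-k8s-version-004791 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0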
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:54:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:54:03.108880   66232 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:54:03.109016   66232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:54:03.109028   66232 out.go:304] Setting ErrFile to fd 2...
	I0314 00:54:03.109034   66232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:54:03.109233   66232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:54:03.109796   66232 out.go:298] Setting JSON to false
	I0314 00:54:03.110638   66232 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5786,"bootTime":1710371857,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:54:03.110699   66232 start.go:139] virtualization: kvm guest
	I0314 00:54:03.113106   66232 out.go:177] * [old-k8s-version-004791] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:54:03.114565   66232 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:54:03.115894   66232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:54:03.114598   66232 notify.go:220] Checking for updates...
	I0314 00:54:03.119029   66232 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:54:03.120493   66232 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:54:03.121915   66232 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:54:03.123383   66232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:54:03.125258   66232 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:54:03.125814   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:54:03.125873   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:54:03.140521   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0314 00:54:03.140889   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:54:03.141339   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:54:03.141362   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:54:03.141702   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:54:03.141898   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:54:03.143989   66232 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0314 00:54:03.145403   66232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:54:03.145671   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:54:03.145711   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:54:03.159852   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0314 00:54:03.160244   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:54:03.160722   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:54:03.160742   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:54:03.161088   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:54:03.161279   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:54:03.197047   66232 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 00:54:03.198624   66232 start.go:297] selected driver: kvm2
	I0314 00:54:03.198642   66232 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:54:03.198784   66232 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:54:03.199455   66232 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:54:03.199536   66232 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:54:03.214619   66232 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:54:03.214983   66232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 00:54:03.215045   66232 cni.go:84] Creating CNI manager for ""
	I0314 00:54:03.215065   66232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:54:03.215109   66232 start.go:340] cluster config:
	{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:54:03.215204   66232 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:54:03.217175   66232 out.go:177] * Starting "old-k8s-version-004791" primary control-plane node in "old-k8s-version-004791" cluster
	I0314 00:54:03.607045   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:03.218613   66232 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:54:03.218655   66232 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0314 00:54:03.218680   66232 cache.go:56] Caching tarball of preloaded images
	I0314 00:54:03.218748   66232 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 00:54:03.218758   66232 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0314 00:54:03.218868   66232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:54:03.219079   66232 start.go:360] acquireMachinesLock for old-k8s-version-004791: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:54:06.679066   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:12.759084   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:15.831164   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:21.911055   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:24.983011   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:31.063042   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:34.135127   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:40.215026   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:43.287108   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:49.367033   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:52.439207   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:58.519055   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:01.591066   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:07.671067   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:10.743137   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:16.823021   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:19.895094   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:25.975060   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:29.047059   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:35.127005   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:38.199075   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:44.279056   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:47.351112   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:53.431074   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:56.503093   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:02.583065   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:05.655062   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:11.735056   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:14.807089   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:20.887027   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:23.959111   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:30.039063   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:33.111114   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:39.191071   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:42.263146   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:48.343110   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:51.415094   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:57.495078   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:00.567113   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:06.647070   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:09.719103   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:15.799052   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:18.871072   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:21.875726   65864 start.go:364] duration metric: took 3m53.150432404s to acquireMachinesLock for "no-preload-585806"
	I0314 00:57:21.875777   65864 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:57:21.875782   65864 fix.go:54] fixHost starting: 
	I0314 00:57:21.876117   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:57:21.876145   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:57:21.891135   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0314 00:57:21.891589   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:57:21.892096   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:57:21.892118   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:57:21.892476   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:57:21.892705   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:21.892868   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:57:21.894635   65864 fix.go:112] recreateIfNeeded on no-preload-585806: state=Stopped err=<nil>
	I0314 00:57:21.894652   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	W0314 00:57:21.894870   65864 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:57:21.896740   65864 out.go:177] * Restarting existing kvm2 VM for "no-preload-585806" ...
	I0314 00:57:21.898041   65864 main.go:141] libmachine: (no-preload-585806) Calling .Start
	I0314 00:57:21.898219   65864 main.go:141] libmachine: (no-preload-585806) Ensuring networks are active...
	I0314 00:57:21.899235   65864 main.go:141] libmachine: (no-preload-585806) Ensuring network default is active
	I0314 00:57:21.899677   65864 main.go:141] libmachine: (no-preload-585806) Ensuring network mk-no-preload-585806 is active
	I0314 00:57:21.900069   65864 main.go:141] libmachine: (no-preload-585806) Getting domain xml...
	I0314 00:57:21.900819   65864 main.go:141] libmachine: (no-preload-585806) Creating domain...
	I0314 00:57:23.105194   65864 main.go:141] libmachine: (no-preload-585806) Waiting to get IP...
	I0314 00:57:23.106090   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.106528   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.106637   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.106516   66729 retry.go:31] will retry after 255.90484ms: waiting for machine to come up
	I0314 00:57:23.364317   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.364804   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.364826   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.364757   66729 retry.go:31] will retry after 364.462281ms: waiting for machine to come up
	I0314 00:57:21.873289   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:57:21.873326   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:57:21.873694   65557 buildroot.go:166] provisioning hostname "embed-certs-164135"
	I0314 00:57:21.873720   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:57:21.873951   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:57:21.875591   65557 machine.go:97] duration metric: took 4m37.40921849s to provisionDockerMachine
	I0314 00:57:21.875631   65557 fix.go:56] duration metric: took 4m37.430459802s for fixHost
	I0314 00:57:21.875640   65557 start.go:83] releasing machines lock for "embed-certs-164135", held for 4m37.43047806s
	W0314 00:57:21.875666   65557 start.go:713] error starting host: provision: host is not running
	W0314 00:57:21.875751   65557 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0314 00:57:21.875760   65557 start.go:728] Will try again in 5 seconds ...
	I0314 00:57:23.731388   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.731971   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.732021   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.731924   66729 retry.go:31] will retry after 426.10288ms: waiting for machine to come up
	I0314 00:57:24.159436   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:24.159930   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:24.159966   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:24.159889   66729 retry.go:31] will retry after 490.499532ms: waiting for machine to come up
	I0314 00:57:24.651751   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:24.652239   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:24.652273   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:24.652218   66729 retry.go:31] will retry after 719.835184ms: waiting for machine to come up
	I0314 00:57:25.374185   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:25.374702   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:25.374728   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:25.374660   66729 retry.go:31] will retry after 944.773779ms: waiting for machine to come up
	I0314 00:57:26.320707   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:26.321049   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:26.321080   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:26.320994   66729 retry.go:31] will retry after 1.088133876s: waiting for machine to come up
	I0314 00:57:27.410642   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:27.411035   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:27.411066   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:27.410989   66729 retry.go:31] will retry after 1.379863279s: waiting for machine to come up
	I0314 00:57:26.877563   65557 start.go:360] acquireMachinesLock for embed-certs-164135: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:57:28.792154   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:28.792533   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:28.792564   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:28.792473   66729 retry.go:31] will retry after 1.814530842s: waiting for machine to come up
	I0314 00:57:30.609244   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:30.609658   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:30.609693   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:30.609597   66729 retry.go:31] will retry after 1.625136332s: waiting for machine to come up
	I0314 00:57:32.236903   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:32.237390   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:32.237409   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:32.237352   66729 retry.go:31] will retry after 1.788940449s: waiting for machine to come up
	I0314 00:57:34.028330   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:34.028825   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:34.028863   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:34.028779   66729 retry.go:31] will retry after 3.427808205s: waiting for machine to come up
	I0314 00:57:37.458317   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:37.458803   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:37.458835   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:37.458738   66729 retry.go:31] will retry after 3.173848854s: waiting for machine to come up
	I0314 00:57:41.915825   66021 start.go:364] duration metric: took 3m51.688049305s to acquireMachinesLock for "default-k8s-diff-port-652215"
	I0314 00:57:41.915886   66021 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:57:41.915895   66021 fix.go:54] fixHost starting: 
	I0314 00:57:41.916343   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:57:41.916378   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:57:41.933352   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37953
	I0314 00:57:41.933827   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:57:41.934418   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:57:41.934441   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:57:41.934820   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:57:41.934993   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:57:41.935162   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:57:41.936554   66021 fix.go:112] recreateIfNeeded on default-k8s-diff-port-652215: state=Stopped err=<nil>
	I0314 00:57:41.936586   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	W0314 00:57:41.936734   66021 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:57:41.939097   66021 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-652215" ...
	I0314 00:57:40.636094   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.636607   65864 main.go:141] libmachine: (no-preload-585806) Found IP for machine: 192.168.39.115
	I0314 00:57:40.636638   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has current primary IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.636645   65864 main.go:141] libmachine: (no-preload-585806) Reserving static IP address...
	I0314 00:57:40.637156   65864 main.go:141] libmachine: (no-preload-585806) Reserved static IP address: 192.168.39.115
	I0314 00:57:40.637189   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "no-preload-585806", mac: "52:54:00:2a:3a:3b", ip: "192.168.39.115"} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.637199   65864 main.go:141] libmachine: (no-preload-585806) Waiting for SSH to be available...
	I0314 00:57:40.637238   65864 main.go:141] libmachine: (no-preload-585806) DBG | skip adding static IP to network mk-no-preload-585806 - found existing host DHCP lease matching {name: "no-preload-585806", mac: "52:54:00:2a:3a:3b", ip: "192.168.39.115"}
	I0314 00:57:40.637254   65864 main.go:141] libmachine: (no-preload-585806) DBG | Getting to WaitForSSH function...
	I0314 00:57:40.639772   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.640240   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.640272   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.640445   65864 main.go:141] libmachine: (no-preload-585806) DBG | Using SSH client type: external
	I0314 00:57:40.640474   65864 main.go:141] libmachine: (no-preload-585806) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa (-rw-------)
	I0314 00:57:40.640508   65864 main.go:141] libmachine: (no-preload-585806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:57:40.640524   65864 main.go:141] libmachine: (no-preload-585806) DBG | About to run SSH command:
	I0314 00:57:40.640533   65864 main.go:141] libmachine: (no-preload-585806) DBG | exit 0
	I0314 00:57:40.770988   65864 main.go:141] libmachine: (no-preload-585806) DBG | SSH cmd err, output: <nil>: 
	I0314 00:57:40.771390   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetConfigRaw
	I0314 00:57:40.772025   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:40.774781   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.775128   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.775161   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.775407   65864 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/config.json ...
	I0314 00:57:40.775636   65864 machine.go:94] provisionDockerMachine start ...
	I0314 00:57:40.775658   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:40.775856   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:40.778051   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.778420   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.778447   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.778517   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:40.778728   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.778917   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.779101   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:40.779283   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:40.779521   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:40.779535   65864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:57:40.891616   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:57:40.891661   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:40.891913   65864 buildroot.go:166] provisioning hostname "no-preload-585806"
	I0314 00:57:40.891947   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:40.892139   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:40.895038   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.895441   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.895473   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.895593   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:40.895778   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.895899   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.896044   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:40.896206   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:40.896418   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:40.896438   65864 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-585806 && echo "no-preload-585806" | sudo tee /etc/hostname
	I0314 00:57:41.027921   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-585806
	
	I0314 00:57:41.027946   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.030406   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.030826   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.030856   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.031091   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.031314   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.031458   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.031656   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.031820   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.032043   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.032064   65864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-585806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-585806/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-585806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:57:41.152387   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:57:41.152420   65864 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:57:41.152443   65864 buildroot.go:174] setting up certificates
	I0314 00:57:41.152451   65864 provision.go:84] configureAuth start
	I0314 00:57:41.152459   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:41.152713   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:41.155431   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.155790   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.155816   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.155963   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.158272   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.158691   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.158720   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.158912   65864 provision.go:143] copyHostCerts
	I0314 00:57:41.158991   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:57:41.159005   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:57:41.159094   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:57:41.159204   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:57:41.159213   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:57:41.159242   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:57:41.159299   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:57:41.159306   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:57:41.159326   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:57:41.159380   65864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.no-preload-585806 san=[127.0.0.1 192.168.39.115 localhost minikube no-preload-585806]
	I0314 00:57:41.204543   65864 provision.go:177] copyRemoteCerts
	I0314 00:57:41.204599   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:57:41.204624   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.207169   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.207479   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.207505   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.207717   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.207870   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.208042   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.208200   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.294111   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:57:41.319125   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 00:57:41.344061   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:57:41.369393   65864 provision.go:87] duration metric: took 216.929827ms to configureAuth
	I0314 00:57:41.369428   65864 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:57:41.369621   65864 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:57:41.369690   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.372440   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.372782   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.372809   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.373062   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.373298   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.373543   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.373716   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.373895   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.374097   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.374122   65864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:57:41.665162   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:57:41.665200   65864 machine.go:97] duration metric: took 889.549183ms to provisionDockerMachine
	I0314 00:57:41.665214   65864 start.go:293] postStartSetup for "no-preload-585806" (driver="kvm2")
	I0314 00:57:41.665227   65864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:57:41.665243   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.665626   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:57:41.665662   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.668351   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.668798   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.668827   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.669012   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.669412   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.669635   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.669794   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.758910   65864 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:57:41.763539   65864 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:57:41.763571   65864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:57:41.763645   65864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:57:41.763719   65864 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:57:41.763809   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:57:41.774372   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:57:41.799961   65864 start.go:296] duration metric: took 134.732457ms for postStartSetup
	I0314 00:57:41.800006   65864 fix.go:56] duration metric: took 19.924222364s for fixHost
	I0314 00:57:41.800030   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.802714   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.803178   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.803201   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.803357   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.803557   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.803730   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.803888   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.804064   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.804220   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.804231   65864 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:57:41.915615   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377861.868053197
	
	I0314 00:57:41.915646   65864 fix.go:216] guest clock: 1710377861.868053197
	I0314 00:57:41.915654   65864 fix.go:229] Guest: 2024-03-14 00:57:41.868053197 +0000 UTC Remote: 2024-03-14 00:57:41.800010702 +0000 UTC m=+253.225618100 (delta=68.042495ms)
	I0314 00:57:41.915695   65864 fix.go:200] guest clock delta is within tolerance: 68.042495ms
	I0314 00:57:41.915704   65864 start.go:83] releasing machines lock for "no-preload-585806", held for 20.039948178s
	I0314 00:57:41.915733   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.916097   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:41.918713   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.919145   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.919175   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.919352   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.919878   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.920065   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.920140   65864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:57:41.920200   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.920257   65864 ssh_runner.go:195] Run: cat /version.json
	I0314 00:57:41.920279   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.922799   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923104   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923176   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.923200   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923333   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.923527   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.923572   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.923602   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923710   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.923788   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.923884   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.923950   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.924091   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.924265   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:42.004651   65864 ssh_runner.go:195] Run: systemctl --version
	I0314 00:57:42.045673   65864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:57:42.198196   65864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:57:42.204887   65864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:57:42.204968   65864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:57:42.223088   65864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:57:42.223116   65864 start.go:494] detecting cgroup driver to use...
	I0314 00:57:42.223181   65864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:57:42.240213   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:57:42.260222   65864 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:57:42.260282   65864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:57:42.279489   65864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:57:42.297898   65864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:57:42.436010   65864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:57:42.591582   65864 docker.go:233] disabling docker service ...
	I0314 00:57:42.591653   65864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:57:42.609192   65864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:57:42.629505   65864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:57:42.788667   65864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:57:42.920745   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:57:42.947679   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:57:42.970420   65864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:57:42.970496   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:42.984792   65864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:57:42.984851   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:42.998350   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:43.011001   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:43.023341   65864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:57:43.036165   65864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:57:43.047342   65864 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:57:43.047401   65864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:57:43.063390   65864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:57:43.075512   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:57:43.214939   65864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:57:43.370092   65864 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:57:43.370154   65864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:57:43.375110   65864 start.go:562] Will wait 60s for crictl version
	I0314 00:57:43.375156   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.379051   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:57:43.421498   65864 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:57:43.421587   65864 ssh_runner.go:195] Run: crio --version
	I0314 00:57:43.451281   65864 ssh_runner.go:195] Run: crio --version
	I0314 00:57:43.486171   65864 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 00:57:43.487776   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:43.490910   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:43.491299   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:43.491328   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:43.491513   65864 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 00:57:43.495972   65864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:57:43.510066   65864 kubeadm.go:877] updating cluster {Name:no-preload-585806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:57:43.510197   65864 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 00:57:43.510235   65864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:57:43.550172   65864 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 00:57:43.550198   65864 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:57:43.550251   65864 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:43.550290   65864 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.550308   65864 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.550348   65864 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.550373   65864 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.550409   65864 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.550329   65864 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0314 00:57:43.550287   65864 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.551857   65864 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.551883   65864 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.551922   65864 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.551926   65864 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.551915   65864 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.551860   65864 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:43.552047   65864 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0314 00:57:43.552087   65864 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:41.940702   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Start
	I0314 00:57:41.940872   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring networks are active...
	I0314 00:57:41.941571   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring network default is active
	I0314 00:57:41.941942   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring network mk-default-k8s-diff-port-652215 is active
	I0314 00:57:41.942369   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Getting domain xml...
	I0314 00:57:41.943060   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Creating domain...
	I0314 00:57:43.253573   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting to get IP...
	I0314 00:57:43.254399   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.254819   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.254871   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.254798   66848 retry.go:31] will retry after 250.726741ms: waiting for machine to come up
	I0314 00:57:43.507438   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.507947   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.507974   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.507889   66848 retry.go:31] will retry after 261.304364ms: waiting for machine to come up
	I0314 00:57:43.770392   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.770932   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.770992   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.770922   66848 retry.go:31] will retry after 399.951584ms: waiting for machine to come up
	I0314 00:57:44.172796   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.173301   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.173330   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:44.173250   66848 retry.go:31] will retry after 446.71472ms: waiting for machine to come up
	I0314 00:57:44.621959   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.622493   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.622524   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:44.622435   66848 retry.go:31] will retry after 594.760117ms: waiting for machine to come up
	I0314 00:57:43.767614   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.767919   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.781946   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.792745   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.820426   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.821936   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.874149   65864 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0314 00:57:43.874193   65864 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.874207   65864 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0314 00:57:43.874239   65864 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.874263   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.874281   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.909916   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0314 00:57:43.929648   65864 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0314 00:57:43.929701   65864 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.929756   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.929769   65864 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0314 00:57:43.929810   65864 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.929866   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958025   65864 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0314 00:57:43.958074   65864 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.958108   65864 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0314 00:57:43.958151   65864 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.958171   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.958188   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958124   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958192   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:44.099675   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:44.099750   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:44.099805   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:44.099859   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.099898   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:44.099943   65864 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.099999   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0314 00:57:44.100067   65864 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:44.185667   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:44.185697   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0314 00:57:44.185784   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0314 00:57:44.185822   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0314 00:57:44.185833   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.185860   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.185874   65864 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:44.185787   65864 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:44.185787   65864 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:44.191806   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:44.191853   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0314 00:57:44.191922   65864 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:44.205188   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0314 00:57:44.428096   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:47.084005   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.898127832s)
	I0314 00:57:47.084049   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0314 00:57:47.084073   65864 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:47.084084   65864 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.898188272s)
	I0314 00:57:47.084114   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:47.084123   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0314 00:57:47.084163   65864 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.898224944s)
	I0314 00:57:47.084176   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0314 00:57:47.084213   65864 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.892265677s)
	I0314 00:57:47.084231   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0314 00:57:47.084261   65864 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.656144328s)
	I0314 00:57:47.084290   65864 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0314 00:57:47.084313   65864 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:47.084344   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:45.219284   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:45.219835   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:45.219865   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:45.219763   66848 retry.go:31] will retry after 838.074484ms: waiting for machine to come up
	I0314 00:57:46.059759   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:46.060182   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:46.060212   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:46.060124   66848 retry.go:31] will retry after 1.038046627s: waiting for machine to come up
	I0314 00:57:47.100208   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:47.100623   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:47.100651   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:47.100574   66848 retry.go:31] will retry after 1.029629423s: waiting for machine to come up
	I0314 00:57:48.131899   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:48.132332   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:48.132360   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:48.132293   66848 retry.go:31] will retry after 1.38894741s: waiting for machine to come up
	I0314 00:57:49.522727   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:49.523219   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:49.523250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:49.523177   66848 retry.go:31] will retry after 1.498715394s: waiting for machine to come up
	I0314 00:57:51.187413   65864 ssh_runner.go:235] Completed: which crictl: (4.103045994s)
	I0314 00:57:51.187456   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.103319804s)
	I0314 00:57:51.187508   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0314 00:57:51.187527   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:51.187571   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:51.187669   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:51.236123   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 00:57:51.236241   65864 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:53.072155   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.88445651s)
	I0314 00:57:53.072191   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0314 00:57:53.072203   65864 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.835936702s)
	I0314 00:57:53.072239   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0314 00:57:53.072216   65864 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:53.072298   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:51.024135   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:51.024551   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:51.024591   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:51.024485   66848 retry.go:31] will retry after 1.906242033s: waiting for machine to come up
	I0314 00:57:52.931992   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:52.932501   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:52.932532   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:52.932435   66848 retry.go:31] will retry after 2.502905013s: waiting for machine to come up
	I0314 00:57:55.041813   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969486159s)
	I0314 00:57:55.041846   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0314 00:57:55.041873   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:55.041921   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:56.401046   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.359096555s)
	I0314 00:57:56.401083   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0314 00:57:56.401125   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:56.401206   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:55.438250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:55.438696   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:55.438728   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:55.438645   66848 retry.go:31] will retry after 4.267197677s: waiting for machine to come up
	I0314 00:57:59.709345   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.709884   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Found IP for machine: 192.168.61.7
	I0314 00:57:59.709901   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Reserving static IP address...
	I0314 00:57:59.709912   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has current primary IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.710329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-652215", mac: "52:54:00:58:e5:b0", ip: "192.168.61.7"} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.710365   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | skip adding static IP to network mk-default-k8s-diff-port-652215 - found existing host DHCP lease matching {name: "default-k8s-diff-port-652215", mac: "52:54:00:58:e5:b0", ip: "192.168.61.7"}
	I0314 00:57:59.710387   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Reserved static IP address: 192.168.61.7
	I0314 00:57:59.710404   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for SSH to be available...
	I0314 00:57:59.710420   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Getting to WaitForSSH function...
	I0314 00:57:59.712445   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.712764   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.712794   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.712867   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Using SSH client type: external
	I0314 00:57:59.712903   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa (-rw-------)
	I0314 00:57:59.712926   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:57:59.712940   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | About to run SSH command:
	I0314 00:57:59.712946   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | exit 0
	I0314 00:57:59.831120   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | SSH cmd err, output: <nil>: 
	I0314 00:57:59.831427   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetConfigRaw
	I0314 00:57:59.832230   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:57:59.834631   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.835052   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.835085   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.835264   66021 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/config.json ...
	I0314 00:57:59.835458   66021 machine.go:94] provisionDockerMachine start ...
	I0314 00:57:59.835478   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:57:59.835700   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:57:59.838267   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.838654   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.838681   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.838814   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:57:59.838985   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.839158   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.839318   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:57:59.839533   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:59.839750   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:57:59.839764   66021 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:57:59.943463   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:57:59.943488   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:57:59.943743   66021 buildroot.go:166] provisioning hostname "default-k8s-diff-port-652215"
	I0314 00:57:59.943765   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:57:59.943905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:57:59.946244   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.946561   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.946592   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.946858   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:57:59.947069   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.947218   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.947329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:57:59.947522   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:59.947682   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:57:59.947695   66021 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-652215 && echo "default-k8s-diff-port-652215" | sudo tee /etc/hostname
	I0314 00:58:00.063433   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-652215
	
	I0314 00:58:00.063467   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.066382   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.066832   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.066872   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.067051   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.067272   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.067505   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.067706   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.067914   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:00.068139   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:00.068167   66021 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-652215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-652215/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-652215' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:01.167666   66232 start.go:364] duration metric: took 3m57.948538504s to acquireMachinesLock for "old-k8s-version-004791"
	I0314 00:58:01.167732   66232 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:58:01.167743   66232 fix.go:54] fixHost starting: 
	I0314 00:58:01.168159   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:01.168192   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:01.184977   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39193
	I0314 00:58:01.185352   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:01.185781   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:58:01.185799   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:01.186133   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:01.186318   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:01.186463   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetState
	I0314 00:58:01.187778   66232 fix.go:112] recreateIfNeeded on old-k8s-version-004791: state=Stopped err=<nil>
	I0314 00:58:01.187814   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	W0314 00:58:01.187966   66232 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:58:01.190508   66232 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-004791" ...
	I0314 00:58:00.185178   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:00.185209   66021 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:00.185258   66021 buildroot.go:174] setting up certificates
	I0314 00:58:00.185270   66021 provision.go:84] configureAuth start
	I0314 00:58:00.185286   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:58:00.185558   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:00.188566   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.188946   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.188977   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.189147   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.191605   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.191954   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.191981   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.192111   66021 provision.go:143] copyHostCerts
	I0314 00:58:00.192179   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:00.192193   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:00.192295   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:00.192409   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:00.192420   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:00.192449   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:00.192531   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:00.192541   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:00.192571   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:00.192650   66021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-652215 san=[127.0.0.1 192.168.61.7 default-k8s-diff-port-652215 localhost minikube]
	I0314 00:58:00.441714   66021 provision.go:177] copyRemoteCerts
	I0314 00:58:00.441760   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:00.441783   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.444329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.444711   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.444740   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.444905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.445096   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.445257   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.445369   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:00.529677   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:00.560670   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0314 00:58:00.589572   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:58:00.620349   66021 provision.go:87] duration metric: took 435.063551ms to configureAuth
	I0314 00:58:00.620380   66021 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:00.620576   66021 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:00.620670   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.623250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.623633   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.623663   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.623825   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.624017   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.624205   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.624346   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.624474   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:00.624650   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:00.624664   66021 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:00.940388   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:00.940416   66021 machine.go:97] duration metric: took 1.104945308s to provisionDockerMachine
	I0314 00:58:00.940430   66021 start.go:293] postStartSetup for "default-k8s-diff-port-652215" (driver="kvm2")
	I0314 00:58:00.940443   66021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:00.940513   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:00.940829   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:00.940861   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.943461   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.943854   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.943881   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.944035   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.944233   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.944392   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.944514   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.028775   66021 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:01.034219   66021 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:01.034246   66021 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:01.034319   66021 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:01.034417   66021 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:01.034534   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:01.043871   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:01.068236   66021 start.go:296] duration metric: took 127.791208ms for postStartSetup
	I0314 00:58:01.068281   66021 fix.go:56] duration metric: took 19.152386474s for fixHost
	I0314 00:58:01.068320   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.071153   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.071474   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.071519   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.071664   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.071873   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.072037   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.072184   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.072339   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:01.072546   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:01.072560   66021 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:58:01.167500   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377881.146926820
	
	I0314 00:58:01.167531   66021 fix.go:216] guest clock: 1710377881.146926820
	I0314 00:58:01.167543   66021 fix.go:229] Guest: 2024-03-14 00:58:01.14692682 +0000 UTC Remote: 2024-03-14 00:58:01.068285678 +0000 UTC m=+250.989822406 (delta=78.641142ms)
	I0314 00:58:01.167569   66021 fix.go:200] guest clock delta is within tolerance: 78.641142ms
	I0314 00:58:01.167576   66021 start.go:83] releasing machines lock for "default-k8s-diff-port-652215", held for 19.251715411s
	I0314 00:58:01.167603   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.167900   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:01.170608   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.171001   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.171041   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.171190   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171674   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171856   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171937   66021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:01.171985   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.172100   66021 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:01.172128   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.174787   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.174963   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175180   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.175209   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175343   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.175398   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175477   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.175553   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.175677   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.175741   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.175803   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.175880   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.175939   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.176003   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.251768   66021 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:01.289374   66021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:01.438966   66021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:01.445524   66021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:01.445595   66021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:01.463672   66021 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:01.463699   66021 start.go:494] detecting cgroup driver to use...
	I0314 00:58:01.463778   66021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:01.485254   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:01.503492   66021 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:01.503552   66021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:01.522423   66021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:01.537421   66021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:01.664303   66021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:01.819916   66021 docker.go:233] disabling docker service ...
	I0314 00:58:01.819980   66021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:01.838697   66021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:01.853242   66021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:02.003570   66021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:02.146836   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:02.162421   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:02.191202   66021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:58:02.191272   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.206856   66021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:02.206923   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.219794   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.233272   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.245213   66021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:02.259118   66021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:02.273991   66021 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:02.274056   66021 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:02.289319   66021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:02.300063   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:02.416447   66021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:58:02.566738   66021 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:02.566859   66021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:02.572193   66021 start.go:562] Will wait 60s for crictl version
	I0314 00:58:02.572234   66021 ssh_runner.go:195] Run: which crictl
	I0314 00:58:02.576144   66021 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:02.615025   66021 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:02.615124   66021 ssh_runner.go:195] Run: crio --version
	I0314 00:58:02.643201   66021 ssh_runner.go:195] Run: crio --version
	I0314 00:58:02.673207   66021 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 00:58:01.192096   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .Start
	I0314 00:58:01.192279   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring networks are active...
	I0314 00:58:01.192923   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network default is active
	I0314 00:58:01.193276   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network mk-old-k8s-version-004791 is active
	I0314 00:58:01.193771   66232 main.go:141] libmachine: (old-k8s-version-004791) Getting domain xml...
	I0314 00:58:01.194453   66232 main.go:141] libmachine: (old-k8s-version-004791) Creating domain...
	I0314 00:58:02.495098   66232 main.go:141] libmachine: (old-k8s-version-004791) Waiting to get IP...
	I0314 00:58:02.496096   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:02.496509   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:02.496599   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:02.496504   66971 retry.go:31] will retry after 226.458873ms: waiting for machine to come up
	I0314 00:58:02.724812   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:02.725355   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:02.725383   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:02.725305   66971 retry.go:31] will retry after 274.59062ms: waiting for machine to come up
	I0314 00:58:03.001727   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.002335   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.002486   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.002429   66971 retry.go:31] will retry after 362.865307ms: waiting for machine to come up
	I0314 00:57:58.881850   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.480612113s)
	I0314 00:57:58.881884   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0314 00:57:58.881919   65864 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:58.881990   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:59.732349   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 00:57:59.732390   65864 cache_images.go:123] Successfully loaded all cached images
	I0314 00:57:59.732395   65864 cache_images.go:92] duration metric: took 16.182181374s to LoadCachedImages
	I0314 00:57:59.732406   65864 kubeadm.go:928] updating node { 192.168.39.115 8443 v1.29.0-rc.2 crio true true} ...
	I0314 00:57:59.732566   65864 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-585806 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:57:59.732632   65864 ssh_runner.go:195] Run: crio config
	I0314 00:57:59.780946   65864 cni.go:84] Creating CNI manager for ""
	I0314 00:57:59.780969   65864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:57:59.780980   65864 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:57:59.780999   65864 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-585806 NodeName:no-preload-585806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:57:59.781184   65864 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-585806"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:57:59.781255   65864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 00:57:59.791989   65864 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:57:59.792059   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:57:59.801720   65864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0314 00:57:59.819248   65864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 00:57:59.837405   65864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 00:57:59.855909   65864 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0314 00:57:59.861139   65864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:57:59.877573   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:00.004672   65864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:00.025676   65864 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806 for IP: 192.168.39.115
	I0314 00:58:00.025696   65864 certs.go:194] generating shared ca certs ...
	I0314 00:58:00.025711   65864 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:00.025861   65864 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:00.025912   65864 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:00.025925   65864 certs.go:256] generating profile certs ...
	I0314 00:58:00.026023   65864 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/client.key
	I0314 00:58:00.026093   65864 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.key.e22b08b3
	I0314 00:58:00.026150   65864 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.key
	I0314 00:58:00.026304   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:00.026342   65864 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:00.026355   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:00.026393   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:00.026424   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:00.026461   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:00.026510   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:00.027206   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:00.087876   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:00.130974   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:00.159419   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:00.202659   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 00:58:00.248014   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 00:58:00.273362   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:00.297326   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:58:00.321565   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:00.346012   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:00.370094   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:00.393592   65864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:00.411060   65864 ssh_runner.go:195] Run: openssl version
	I0314 00:58:00.417031   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:00.428430   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.433251   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.433303   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.439142   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:00.451840   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:00.466706   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.472024   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.472101   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.479004   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:00.490877   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:00.503120   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.507926   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.507973   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.513957   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:00.526055   65864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:00.531442   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:00.538049   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:00.544709   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:00.551218   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:00.557610   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:00.564187   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 00:58:00.571582   65864 kubeadm.go:391] StartCluster: {Name:no-preload-585806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:00.571725   65864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:00.571793   65864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:00.625273   65864 cri.go:89] found id: ""
	I0314 00:58:00.625330   65864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:00.636554   65864 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:00.636582   65864 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:00.636588   65864 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:00.636630   65864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:00.648360   65864 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:00.649289   65864 kubeconfig.go:125] found "no-preload-585806" server: "https://192.168.39.115:8443"
	I0314 00:58:00.652107   65864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:00.664337   65864 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.115
	I0314 00:58:00.664378   65864 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:00.664390   65864 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:00.664436   65864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:00.702043   65864 cri.go:89] found id: ""
	I0314 00:58:00.702119   65864 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:00.721052   65864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:00.732931   65864 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:00.732961   65864 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:00.733015   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:00.743282   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:00.743363   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:00.753893   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:00.764545   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:00.764603   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:00.779121   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:00.795628   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:00.795690   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:00.807835   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:00.820920   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:00.821000   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:00.834341   65864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:00.844677   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:00.971502   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:01.810329   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:02.063422   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:02.144025   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:02.284020   65864 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:02.284117   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:02.784938   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:03.285046   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:03.349582   65864 api_server.go:72] duration metric: took 1.065560764s to wait for apiserver process to appear ...
	I0314 00:58:03.349613   65864 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:03.349634   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:03.350222   65864 api_server.go:269] stopped: https://192.168.39.115:8443/healthz: Get "https://192.168.39.115:8443/healthz": dial tcp 192.168.39.115:8443: connect: connection refused
	I0314 00:58:02.674905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:02.677914   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:02.678319   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:02.678358   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:02.678506   66021 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:02.682714   66021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:02.696263   66021 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-652215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:02.696407   66021 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:58:02.696474   66021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:02.736997   66021 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 00:58:02.737060   66021 ssh_runner.go:195] Run: which lz4
	I0314 00:58:02.741014   66021 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 00:58:02.745225   66021 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:02.745255   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 00:58:04.577503   66021 crio.go:444] duration metric: took 1.836515386s to copy over tarball
	I0314 00:58:04.577580   66021 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:03.367211   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.367946   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.367985   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.367818   66971 retry.go:31] will retry after 545.955079ms: waiting for machine to come up
	I0314 00:58:03.915415   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.915920   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.915946   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.915836   66971 retry.go:31] will retry after 509.217519ms: waiting for machine to come up
	I0314 00:58:04.426378   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:04.426707   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:04.426730   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:04.426682   66971 retry.go:31] will retry after 834.85927ms: waiting for machine to come up
	I0314 00:58:05.263751   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:05.264214   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:05.264244   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:05.264155   66971 retry.go:31] will retry after 986.483361ms: waiting for machine to come up
	I0314 00:58:06.251927   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:06.252550   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:06.252573   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:06.252475   66971 retry.go:31] will retry after 1.151541473s: waiting for machine to come up
	I0314 00:58:07.405797   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:07.406395   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:07.406425   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:07.406349   66971 retry.go:31] will retry after 1.406754601s: waiting for machine to come up
	I0314 00:58:03.850705   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.738726   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:06.738753   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:06.738788   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.754844   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:06.754883   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:06.850175   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.859445   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:06.859483   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.350592   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:07.367299   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:07.367337   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.850476   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:08.566122   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:08.566165   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:08.566182   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:08.571741   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:08.571777   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.355046   66021 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.77743394s)
	I0314 00:58:07.355081   66021 crio.go:451] duration metric: took 2.77754644s to extract the tarball
	I0314 00:58:07.355093   66021 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:07.401032   66021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:07.451493   66021 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:58:07.451515   66021 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:58:07.451523   66021 kubeadm.go:928] updating node { 192.168.61.7 8444 v1.28.4 crio true true} ...
	I0314 00:58:07.451679   66021 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-652215 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:07.451756   66021 ssh_runner.go:195] Run: crio config
	I0314 00:58:07.500159   66021 cni.go:84] Creating CNI manager for ""
	I0314 00:58:07.500182   66021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:07.500192   66021 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:07.500211   66021 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.7 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-652215 NodeName:default-k8s-diff-port-652215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:58:07.500349   66021 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-652215"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:07.500398   66021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:58:07.515207   66021 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:07.515281   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:07.530918   66021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0314 00:58:07.558457   66021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:07.582126   66021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 00:58:07.678701   66021 ssh_runner.go:195] Run: grep 192.168.61.7	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:07.684200   66021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:07.701599   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:07.825784   66021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:07.848241   66021 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215 for IP: 192.168.61.7
	I0314 00:58:07.848265   66021 certs.go:194] generating shared ca certs ...
	I0314 00:58:07.848286   66021 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:07.848457   66021 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:07.848515   66021 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:07.848529   66021 certs.go:256] generating profile certs ...
	I0314 00:58:07.848644   66021 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/client.key
	I0314 00:58:07.935830   66021 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.key.b1ed833a
	I0314 00:58:07.935933   66021 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.key
	I0314 00:58:07.936092   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:07.936147   66021 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:07.936161   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:07.936191   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:07.936222   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:07.936255   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:07.936326   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:07.937040   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:07.981116   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:08.010341   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:08.036689   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:08.064909   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 00:58:08.092883   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 00:58:08.119465   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:08.146029   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 00:58:08.171735   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:08.198370   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:08.225423   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:08.253303   66021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:08.272262   66021 ssh_runner.go:195] Run: openssl version
	I0314 00:58:08.278047   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:08.289661   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.294307   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.294365   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.300267   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:08.311382   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:08.322886   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.328522   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.328588   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.335598   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:08.347048   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:08.358811   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.365065   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.365113   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.372929   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:08.384586   66021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:08.389382   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:08.395577   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:08.401901   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:08.409134   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:08.415666   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:08.422160   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 00:58:08.428553   66021 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-652215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:08.428681   66021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:08.428757   66021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:08.471162   66021 cri.go:89] found id: ""
	I0314 00:58:08.471246   66021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:08.482236   66021 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:08.482258   66021 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:08.482266   66021 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:08.482318   66021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:08.492599   66021 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:08.493612   66021 kubeconfig.go:125] found "default-k8s-diff-port-652215" server: "https://192.168.61.7:8444"
	I0314 00:58:08.495896   66021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:08.509437   66021 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.7
	I0314 00:58:08.509469   66021 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:08.509498   66021 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:08.509552   66021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:08.549257   66021 cri.go:89] found id: ""
	I0314 00:58:08.549319   66021 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:08.570357   66021 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:08.580942   66021 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:08.580961   66021 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:08.581002   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 00:58:08.590668   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:08.590750   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:08.600638   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 00:58:08.610219   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:08.610289   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:08.620324   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 00:58:08.629979   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:08.630037   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:08.640264   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 00:58:08.650070   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:08.650126   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:08.661293   66021 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:08.671779   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:08.808194   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:09.724860   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:09.979007   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:10.059809   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:08.850333   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.132696   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.132738   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:09.349928   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.354965   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.355007   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:09.850589   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.855760   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.855791   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:10.350395   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:10.356047   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0314 00:58:10.363343   65864 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 00:58:10.363367   65864 api_server.go:131] duration metric: took 7.013748269s to wait for apiserver health ...
	I0314 00:58:10.363376   65864 cni.go:84] Creating CNI manager for ""
	I0314 00:58:10.363382   65864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:10.365214   65864 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:10.366578   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:58:10.388294   65864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:58:10.416671   65864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:58:10.432468   65864 system_pods.go:59] 8 kube-system pods found
	I0314 00:58:10.432506   65864 system_pods.go:61] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:58:10.432513   65864 system_pods.go:61] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:58:10.432522   65864 system_pods.go:61] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:58:10.432528   65864 system_pods.go:61] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:58:10.432532   65864 system_pods.go:61] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 00:58:10.432536   65864 system_pods.go:61] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:58:10.432541   65864 system_pods.go:61] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:58:10.432545   65864 system_pods.go:61] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 00:58:10.432552   65864 system_pods.go:74] duration metric: took 15.857608ms to wait for pod list to return data ...
	I0314 00:58:10.432558   65864 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:58:10.435982   65864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:58:10.436009   65864 node_conditions.go:123] node cpu capacity is 2
	I0314 00:58:10.436022   65864 node_conditions.go:105] duration metric: took 3.459248ms to run NodePressure ...
	I0314 00:58:10.436048   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:10.711752   65864 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:58:10.718781   65864 kubeadm.go:733] kubelet initialised
	I0314 00:58:10.718802   65864 kubeadm.go:734] duration metric: took 7.016806ms waiting for restarted kubelet to initialise ...
	I0314 00:58:10.718811   65864 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:10.725838   65864 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.732973   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "coredns-76f75df574-lptfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.733003   65864 pod_ready.go:81] duration metric: took 7.130935ms for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.733015   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "coredns-76f75df574-lptfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.733024   65864 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.739301   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "etcd-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.739330   65864 pod_ready.go:81] duration metric: took 6.292816ms for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.739344   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "etcd-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.739353   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.745734   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-apiserver-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.745764   65864 pod_ready.go:81] duration metric: took 6.401917ms for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.745775   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-apiserver-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.745793   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.823797   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.823901   65864 pod_ready.go:81] duration metric: took 78.092373ms for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.823920   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.823930   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:11.221218   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-proxy-wpdb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.221255   65864 pod_ready.go:81] duration metric: took 397.31401ms for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:11.221268   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-proxy-wpdb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.221276   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:11.622051   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-scheduler-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.622089   65864 pod_ready.go:81] duration metric: took 400.804067ms for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:11.622101   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-scheduler-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.622109   65864 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:12.021835   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:12.021869   65864 pod_ready.go:81] duration metric: took 399.741056ms for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:12.021882   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:12.021892   65864 pod_ready.go:38] duration metric: took 1.303069721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:12.021915   65864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:58:12.039361   65864 ops.go:34] apiserver oom_adj: -16
	I0314 00:58:12.039397   65864 kubeadm.go:591] duration metric: took 11.402802169s to restartPrimaryControlPlane
	I0314 00:58:12.039408   65864 kubeadm.go:393] duration metric: took 11.467836192s to StartCluster
	I0314 00:58:12.039426   65864 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:12.039516   65864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:12.041925   65864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:12.042230   65864 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:58:12.044069   65864 out.go:177] * Verifying Kubernetes components...
	I0314 00:58:12.042310   65864 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:58:12.042489   65864 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:58:12.045460   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:12.045470   65864 addons.go:69] Setting metrics-server=true in profile "no-preload-585806"
	I0314 00:58:12.045505   65864 addons.go:234] Setting addon metrics-server=true in "no-preload-585806"
	W0314 00:58:12.045517   65864 addons.go:243] addon metrics-server should already be in state true
	I0314 00:58:12.045461   65864 addons.go:69] Setting storage-provisioner=true in profile "no-preload-585806"
	I0314 00:58:12.045548   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.045557   65864 addons.go:234] Setting addon storage-provisioner=true in "no-preload-585806"
	W0314 00:58:12.045568   65864 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:58:12.045595   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.045462   65864 addons.go:69] Setting default-storageclass=true in profile "no-preload-585806"
	I0314 00:58:12.045653   65864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-585806"
	I0314 00:58:12.045960   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.045989   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.045989   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.046009   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.046026   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.046052   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.065596   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38705
	I0314 00:58:12.065599   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
	I0314 00:58:12.066126   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.066229   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.066725   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.066747   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.066921   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.066937   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.067164   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.067341   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.067347   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.067943   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.067969   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.071254   65864 addons.go:234] Setting addon default-storageclass=true in "no-preload-585806"
	W0314 00:58:12.071275   65864 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:58:12.071302   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.071676   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.071703   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.089025   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0314 00:58:12.089439   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.089971   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.089987   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.091596   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38431
	I0314 00:58:12.091896   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.092061   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.092552   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.092573   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.092792   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39831
	I0314 00:58:12.092997   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.093009   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.093356   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.093879   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.093914   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.094125   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.094811   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.094830   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.095229   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.095432   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.097415   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.099392   65864 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:12.100577   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:58:12.100594   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:58:12.100618   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.103892   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.104467   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.104489   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.104667   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.106971   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.107150   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.107313   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.111900   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0314 00:58:12.112581   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.113114   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.113130   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.113580   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.113776   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.115360   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.115676   65864 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:12.115691   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:58:12.115707   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.117453   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0314 00:58:12.118029   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.118488   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.118776   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.118793   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.118960   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.118982   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.119173   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.119729   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.119945   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.121529   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.123821   65864 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:08.814918   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:08.815383   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:08.815414   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:08.815336   66971 retry.go:31] will retry after 1.619075545s: waiting for machine to come up
	I0314 00:58:10.435841   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:10.436245   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:10.436272   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:10.436204   66971 retry.go:31] will retry after 2.396707044s: waiting for machine to come up
	I0314 00:58:12.834287   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:12.834691   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:12.834720   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:12.834649   66971 retry.go:31] will retry after 2.803309164s: waiting for machine to come up
	I0314 00:58:12.122163   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.125529   65864 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:12.125549   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:58:12.125566   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.125622   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.128908   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.128920   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.129475   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.129499   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.129653   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.129851   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.130023   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.130149   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.258865   65864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:12.279758   65864 node_ready.go:35] waiting up to 6m0s for node "no-preload-585806" to be "Ready" ...
	I0314 00:58:12.393255   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:58:12.393276   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:58:12.396083   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:12.401894   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:12.442825   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:58:12.442852   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:58:12.516967   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:12.516997   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:58:12.549493   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:13.476386   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.080265638s)
	I0314 00:58:13.476460   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.476489   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.476397   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.074462931s)
	I0314 00:58:13.476626   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.476639   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477023   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477039   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477036   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477047   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.477055   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477066   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477071   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477087   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477094   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.477100   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477458   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477491   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477498   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477550   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477566   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.489141   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.489174   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.489460   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.489522   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.489541   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.586956   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037420385s)
	I0314 00:58:13.587013   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.587029   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.587367   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.587386   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.587396   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.587405   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.587406   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.587781   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.587856   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.587878   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.587910   65864 addons.go:470] Verifying addon metrics-server=true in "no-preload-585806"
	I0314 00:58:13.590325   65864 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:58:13.591691   65864 addons.go:505] duration metric: took 1.549382287s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 00:58:10.176806   66021 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:10.176884   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:10.677299   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:11.177069   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:11.214552   66021 api_server.go:72] duration metric: took 1.037744324s to wait for apiserver process to appear ...
	I0314 00:58:11.214587   66021 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:11.214610   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:11.215138   66021 api_server.go:269] stopped: https://192.168.61.7:8444/healthz: Get "https://192.168.61.7:8444/healthz": dial tcp 192.168.61.7:8444: connect: connection refused
	I0314 00:58:11.714667   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.616838   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:14.616877   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:14.616893   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.658759   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:14.658796   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:14.715024   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.733591   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:14.733634   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:15.214665   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:15.234066   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:15.234110   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:15.715301   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:15.721645   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:15.721675   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:16.215286   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:16.222564   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0314 00:58:16.232709   66021 api_server.go:141] control plane version: v1.28.4
	I0314 00:58:16.232737   66021 api_server.go:131] duration metric: took 5.018142072s to wait for apiserver health ...
	I0314 00:58:16.232747   66021 cni.go:84] Creating CNI manager for ""
	I0314 00:58:16.232756   66021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:16.234470   66021 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:16.235612   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:58:16.248214   66021 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:58:16.277370   66021 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:58:16.288623   66021 system_pods.go:59] 8 kube-system pods found
	I0314 00:58:16.288650   66021 system_pods.go:61] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:58:16.288657   66021 system_pods.go:61] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:58:16.288663   66021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:58:16.288671   66021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:58:16.288677   66021 system_pods.go:61] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 00:58:16.288682   66021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:58:16.288687   66021 system_pods.go:61] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:58:16.288690   66021 system_pods.go:61] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 00:58:16.288696   66021 system_pods.go:74] duration metric: took 11.305344ms to wait for pod list to return data ...
	I0314 00:58:16.288702   66021 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:58:16.292286   66021 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:58:16.292308   66021 node_conditions.go:123] node cpu capacity is 2
	I0314 00:58:16.292320   66021 node_conditions.go:105] duration metric: took 3.61409ms to run NodePressure ...
	I0314 00:58:16.292335   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:16.512870   66021 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:58:16.517507   66021 kubeadm.go:733] kubelet initialised
	I0314 00:58:16.517529   66021 kubeadm.go:734] duration metric: took 4.638745ms waiting for restarted kubelet to initialise ...
	I0314 00:58:16.517536   66021 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:16.523002   66021 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.527973   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.527992   66021 pod_ready.go:81] duration metric: took 4.971635ms for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.527999   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.528005   66021 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.532109   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.532130   66021 pod_ready.go:81] duration metric: took 4.119441ms for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.532138   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.532144   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.536921   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.536947   66021 pod_ready.go:81] duration metric: took 4.797369ms for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.536957   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.536963   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.681145   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.681174   66021 pod_ready.go:81] duration metric: took 144.203955ms for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.681183   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.681189   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.081346   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-proxy-s7dwp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.081372   66021 pod_ready.go:81] duration metric: took 400.176843ms for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.081380   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-proxy-s7dwp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.081386   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.481726   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.481760   66021 pod_ready.go:81] duration metric: took 400.364366ms for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.481775   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.481784   66021 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.881076   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.881101   66021 pod_ready.go:81] duration metric: took 399.308565ms for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.881112   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.881118   66021 pod_ready.go:38] duration metric: took 1.363574607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:17.881137   66021 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:58:17.893680   66021 ops.go:34] apiserver oom_adj: -16
	I0314 00:58:17.893703   66021 kubeadm.go:591] duration metric: took 9.411432465s to restartPrimaryControlPlane
	I0314 00:58:17.893711   66021 kubeadm.go:393] duration metric: took 9.465165177s to StartCluster
	I0314 00:58:17.893725   66021 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:17.893783   66021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:17.895292   66021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:17.895523   66021 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:58:17.897956   66021 out.go:177] * Verifying Kubernetes components...
	I0314 00:58:17.895646   66021 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:58:17.895730   66021 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:17.898002   66021 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.898023   66021 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.899554   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:17.897994   66021 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.899681   66021 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.899693   66021 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:58:17.898063   66021 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-652215"
	I0314 00:58:17.899720   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.898068   66021 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.899784   66021 addons.go:243] addon metrics-server should already be in state true
	I0314 00:58:17.899811   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.900048   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900077   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.900111   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900141   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.900171   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900188   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.915185   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0314 00:58:17.915208   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
	I0314 00:58:17.915576   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.915710   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.916152   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.916171   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.916305   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.916330   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.916511   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.916671   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.916831   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.917105   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.917132   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.918252   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41673
	I0314 00:58:17.918697   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.919230   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.919250   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.919523   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.920110   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.920171   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.920214   66021 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.920231   66021 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:58:17.920262   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.920646   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.920681   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.932173   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0314 00:58:17.932593   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.933094   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.933117   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.933473   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.933707   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.934448   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0314 00:58:17.934516   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I0314 00:58:17.934891   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.935069   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.935423   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.935443   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.935577   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.935595   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.935663   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.937699   66021 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:17.936039   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.936042   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.938931   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:58:17.938948   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:58:17.938977   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.939211   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.939596   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.939625   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.941065   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.942845   66021 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:15.639214   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:15.639656   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:15.639696   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:15.639617   66971 retry.go:31] will retry after 3.192360952s: waiting for machine to come up
	I0314 00:58:14.292798   65864 node_ready.go:53] node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:16.784397   65864 node_ready.go:53] node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:17.284580   65864 node_ready.go:49] node "no-preload-585806" has status "Ready":"True"
	I0314 00:58:17.284611   65864 node_ready.go:38] duration metric: took 5.004823398s for node "no-preload-585806" to be "Ready" ...
	I0314 00:58:17.284623   65864 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:17.290888   65864 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.297127   65864 pod_ready.go:92] pod "coredns-76f75df574-lptfk" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:17.297152   65864 pod_ready.go:81] duration metric: took 6.235547ms for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.297163   65864 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.944316   66021 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:17.942113   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.942648   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.944350   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:58:17.944376   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.944371   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.944451   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.944500   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.944675   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.944826   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:17.947097   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.947474   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.947507   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.947640   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.947816   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.947960   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.948095   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:17.957502   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35563
	I0314 00:58:17.957899   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.958344   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.958364   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.958645   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.958816   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.960222   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.960577   66021 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:17.960591   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:58:17.960610   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.963238   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.963676   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.963698   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.963850   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.963995   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.964114   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.964213   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:18.098402   66021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:18.116854   66021 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-652215" to be "Ready" ...
	I0314 00:58:18.232236   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:58:18.232256   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:58:18.238208   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:18.261851   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:18.263856   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:58:18.263877   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:58:18.325498   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:18.325520   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:58:18.391369   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:19.482825   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24458075s)
	I0314 00:58:19.482879   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.482891   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.482959   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.221078542s)
	I0314 00:58:19.483000   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483017   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483196   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483216   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.483212   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.483224   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483233   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483242   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483258   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.483273   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483280   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483288   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.483551   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483590   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.484020   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.484105   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.484148   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.491315   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.491332   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.491552   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.491583   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583024   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.191597961s)
	I0314 00:58:19.583083   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.583096   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.583362   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.583400   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583421   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.583435   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.583447   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.583724   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.583762   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.583815   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583837   66021 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-652215"
	I0314 00:58:19.585771   66021 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:58:19.587252   66021 addons.go:505] duration metric: took 1.691609624s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
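	The three addons enabled here can be verified from the host once the apiserver answers. A minimal check, assuming the kubeconfig context carries the profile name as elsewhere in this report:
	    out/minikube-linux-amd64 -p default-k8s-diff-port-652215 addons list
	    kubectl --context default-k8s-diff-port-652215 -n kube-system get deploy metrics-server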
	I0314 00:58:20.120924   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:18.833069   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:18.833438   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:18.833470   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:18.833388   66971 retry.go:31] will retry after 5.67556795s: waiting for machine to come up
	I0314 00:58:19.304162   65864 pod_ready.go:102] pod "etcd-no-preload-585806" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:20.804158   65864 pod_ready.go:92] pod "etcd-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.804180   65864 pod_ready.go:81] duration metric: took 3.507009199s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.804191   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.810040   65864 pod_ready.go:92] pod "kube-apiserver-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.810065   65864 pod_ready.go:81] duration metric: took 5.865494ms for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.810080   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.815049   65864 pod_ready.go:92] pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.815077   65864 pod_ready.go:81] duration metric: took 4.984409ms for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.815086   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.821316   65864 pod_ready.go:92] pod "kube-proxy-wpdb9" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.821342   65864 pod_ready.go:81] duration metric: took 6.249664ms for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.821354   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:21.828500   65864 pod_ready.go:92] pod "kube-scheduler-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:21.828524   65864 pod_ready.go:81] duration metric: took 1.00716238s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:21.828533   65864 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:22.621791   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:25.121386   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
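	The node_ready poll above keeps re-reading the node object until its "Ready" condition turns "True". A rough hand-run equivalent, assuming the kubeconfig context is named after the profile as in the rest of this report:
	    # one-shot check of the Ready condition node_ready.go is waiting on
	    kubectl --context default-k8s-diff-port-652215 get node default-k8s-diff-port-652215 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # or let kubectl poll with the same 6m0s budget
	    kubectl --context default-k8s-diff-port-652215 wait --for=condition=Ready \
	      node/default-k8s-diff-port-652215 --timeout=6m0s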
	I0314 00:58:26.059625   65557 start.go:364] duration metric: took 59.181975988s to acquireMachinesLock for "embed-certs-164135"
	I0314 00:58:26.059670   65557 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:58:26.059681   65557 fix.go:54] fixHost starting: 
	I0314 00:58:26.060084   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:26.060117   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:26.079338   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0314 00:58:26.079705   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:26.080159   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:58:26.080181   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:26.080547   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:26.080747   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:26.080907   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:58:26.082633   65557 fix.go:112] recreateIfNeeded on embed-certs-164135: state=Stopped err=<nil>
	I0314 00:58:26.082671   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	W0314 00:58:26.082861   65557 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:58:26.085610   65557 out.go:177] * Restarting existing kvm2 VM for "embed-certs-164135" ...
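	The restart that follows goes through libvirt via the kvm2 driver. For illustration only (the driver calls the libvirt API directly rather than shelling out to virsh), the equivalent steps against the qemu:///system URI from the profile config would be:
	    virsh -c qemu:///system domstate embed-certs-164135    # currently reported as shut off (state=Stopped above)
	    virsh -c qemu:///system start embed-certs-164135       # boot the existing domain
	    virsh -c qemu:///system domifaddr embed-certs-164135   # watch for the DHCP lease the log waits on below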
	I0314 00:58:24.511666   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.512275   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has current primary IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.512307   66232 main.go:141] libmachine: (old-k8s-version-004791) Found IP for machine: 192.168.72.11
	I0314 00:58:24.512321   66232 main.go:141] libmachine: (old-k8s-version-004791) Reserving static IP address...
	I0314 00:58:24.512704   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.512726   66232 main.go:141] libmachine: (old-k8s-version-004791) Reserved static IP address: 192.168.72.11
	I0314 00:58:24.512740   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | skip adding static IP to network mk-old-k8s-version-004791 - found existing host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"}
	I0314 00:58:24.512751   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Getting to WaitForSSH function...
	I0314 00:58:24.512763   66232 main.go:141] libmachine: (old-k8s-version-004791) Waiting for SSH to be available...
	I0314 00:58:24.515177   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.515623   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.515657   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.515863   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH client type: external
	I0314 00:58:24.515892   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa (-rw-------)
	I0314 00:58:24.515924   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:58:24.515940   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | About to run SSH command:
	I0314 00:58:24.515956   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | exit 0
	I0314 00:58:24.642866   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | SSH cmd err, output: <nil>: 
	I0314 00:58:24.643186   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetConfigRaw
	I0314 00:58:24.643853   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:24.645950   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.646309   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.646338   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.646566   66232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:58:24.646801   66232 machine.go:94] provisionDockerMachine start ...
	I0314 00:58:24.646823   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:24.647032   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.649249   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.649588   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.649618   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.649752   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.649926   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.650131   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.650315   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.650487   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.650664   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.650675   66232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:58:24.763290   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:58:24.763320   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:24.763558   66232 buildroot.go:166] provisioning hostname "old-k8s-version-004791"
	I0314 00:58:24.763592   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:24.763745   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.766422   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.766719   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.766745   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.766894   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.767075   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.767238   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.767388   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.767564   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.767776   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.767795   66232 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-004791 && echo "old-k8s-version-004791" | sudo tee /etc/hostname
	I0314 00:58:24.893811   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-004791
	
	I0314 00:58:24.893844   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.896527   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.896909   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.896937   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.897096   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.897277   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.897455   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.897623   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.897814   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.897979   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.897995   66232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-004791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-004791/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-004791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:25.021661   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:25.021695   66232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:25.021722   66232 buildroot.go:174] setting up certificates
	I0314 00:58:25.021735   66232 provision.go:84] configureAuth start
	I0314 00:58:25.021766   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:25.022032   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:25.024687   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.024989   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.025030   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.025155   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.027609   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.027948   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.027977   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.028079   66232 provision.go:143] copyHostCerts
	I0314 00:58:25.028145   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:25.028155   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:25.028208   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:25.028333   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:25.028342   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:25.028361   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:25.028421   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:25.028428   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:25.028445   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:25.028532   66232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-004791 san=[127.0.0.1 192.168.72.11 localhost minikube old-k8s-version-004791]
	I0314 00:58:25.338174   66232 provision.go:177] copyRemoteCerts
	I0314 00:58:25.338239   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:25.338272   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.340651   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.341044   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.341084   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.341243   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.341445   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.341613   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.341779   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:25.437346   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 00:58:25.464534   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:58:25.491186   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:25.520290   66232 provision.go:87] duration metric: took 498.536449ms to configureAuth
	I0314 00:58:25.520330   66232 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:25.520551   66232 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:58:25.520631   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.523579   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.523954   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.523982   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.524176   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.524418   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.524604   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.524841   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.525032   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:25.525233   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:25.525267   66232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:25.813702   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:25.813724   66232 machine.go:97] duration metric: took 1.166910056s to provisionDockerMachine
	I0314 00:58:25.813735   66232 start.go:293] postStartSetup for "old-k8s-version-004791" (driver="kvm2")
	I0314 00:58:25.813745   66232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:25.813767   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:25.814102   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:25.814132   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.816973   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.817316   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.817351   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.817496   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.817695   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.817895   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.818065   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:25.905564   66232 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:25.910139   66232 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:25.910168   66232 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:25.910237   66232 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:25.910315   66232 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:25.910406   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:25.919998   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:25.946236   66232 start.go:296] duration metric: took 132.483335ms for postStartSetup
	I0314 00:58:25.946270   66232 fix.go:56] duration metric: took 24.778527973s for fixHost
	I0314 00:58:25.946291   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.948993   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.949352   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.949382   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.949491   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.949674   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.949839   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.950008   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.950178   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:25.950327   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:25.950337   66232 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:58:26.059477   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377906.045276928
	
	I0314 00:58:26.059498   66232 fix.go:216] guest clock: 1710377906.045276928
	I0314 00:58:26.059504   66232 fix.go:229] Guest: 2024-03-14 00:58:26.045276928 +0000 UTC Remote: 2024-03-14 00:58:25.946273472 +0000 UTC m=+262.884746009 (delta=99.003456ms)
	I0314 00:58:26.059522   66232 fix.go:200] guest clock delta is within tolerance: 99.003456ms
	I0314 00:58:26.059528   66232 start.go:83] releasing machines lock for "old-k8s-version-004791", held for 24.891823469s
	I0314 00:58:26.059556   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.059832   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:26.062667   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.063095   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.063126   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.063322   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064047   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064262   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064348   66232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:26.064396   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:26.064505   66232 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:26.064530   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:26.067308   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067569   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067602   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.067626   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067738   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:26.067912   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:26.068059   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:26.068063   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.068095   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.068199   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:26.068210   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:26.068347   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:26.068538   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:26.068717   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:26.182072   66232 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:26.188630   66232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:26.337675   66232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:26.344107   66232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:26.344178   66232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:26.363679   66232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:26.363704   66232 start.go:494] detecting cgroup driver to use...
	I0314 00:58:26.363770   66232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:26.380626   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:26.397287   66232 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:26.397354   66232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:26.411921   66232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:26.428111   66232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:26.548503   66232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:26.718585   66232 docker.go:233] disabling docker service ...
	I0314 00:58:26.718667   66232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:26.737814   66232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:26.759326   66232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:26.907505   66232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:27.052915   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:27.074324   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:27.096627   66232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 00:58:27.096688   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.109204   66232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:27.109280   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.122529   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.135542   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.149084   66232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:27.166838   66232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:27.178148   66232 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:27.178201   66232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:27.194015   66232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:27.206652   66232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:27.363680   66232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:58:27.546218   66232 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:27.546291   66232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:27.552622   66232 start.go:562] Will wait 60s for crictl version
	I0314 00:58:27.552693   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:27.557087   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:27.600271   66232 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:27.600369   66232 ssh_runner.go:195] Run: crio --version
	I0314 00:58:27.631397   66232 ssh_runner.go:195] Run: crio --version
	I0314 00:58:27.670760   66232 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 00:58:27.671963   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:27.674890   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:27.675324   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:27.675352   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:27.675617   66232 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:27.680460   66232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:27.694168   66232 kubeadm.go:877] updating cluster {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:27.694308   66232 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:58:27.694363   66232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:27.750541   66232 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:58:27.750608   66232 ssh_runner.go:195] Run: which lz4
	I0314 00:58:27.755341   66232 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 00:58:27.759948   66232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:27.759972   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 00:58:23.835559   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:25.840794   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:28.343597   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
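	metrics-server-57f55c9bc5-7pzll is the only system-critical pod still blocking the extra wait started earlier in this run. A hand-run equivalent of that check, assuming the context follows the profile name and that the addon labels its pods k8s-app=metrics-server:
	    kubectl --context no-preload-585806 -n kube-system wait --for=condition=Ready \
	      pod -l k8s-app=metrics-server --timeout=6m0s
	    # inspect why the pod stays NotReady
	    kubectl --context no-preload-585806 -n kube-system describe pod metrics-server-57f55c9bc5-7pzll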
	I0314 00:58:26.087053   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Start
	I0314 00:58:26.087223   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring networks are active...
	I0314 00:58:26.087972   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring network default is active
	I0314 00:58:26.088454   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring network mk-embed-certs-164135 is active
	I0314 00:58:26.088918   65557 main.go:141] libmachine: (embed-certs-164135) Getting domain xml...
	I0314 00:58:26.089551   65557 main.go:141] libmachine: (embed-certs-164135) Creating domain...
	I0314 00:58:27.427891   65557 main.go:141] libmachine: (embed-certs-164135) Waiting to get IP...
	I0314 00:58:27.428743   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.429231   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.429301   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.429210   67191 retry.go:31] will retry after 285.906124ms: waiting for machine to come up
	I0314 00:58:27.716658   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.717175   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.717209   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.717136   67191 retry.go:31] will retry after 261.410434ms: waiting for machine to come up
	I0314 00:58:27.980701   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.981229   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.981260   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.981171   67191 retry.go:31] will retry after 383.915233ms: waiting for machine to come up
	I0314 00:58:28.366876   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:28.367381   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:28.367410   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:28.367323   67191 retry.go:31] will retry after 409.436475ms: waiting for machine to come up
	I0314 00:58:28.778072   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:28.778576   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:28.778610   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:28.778531   67191 retry.go:31] will retry after 645.067189ms: waiting for machine to come up
	I0314 00:58:25.621956   66021 node_ready.go:49] node "default-k8s-diff-port-652215" has status "Ready":"True"
	I0314 00:58:25.621981   66021 node_ready.go:38] duration metric: took 7.505100774s for node "default-k8s-diff-port-652215" to be "Ready" ...
	I0314 00:58:25.622001   66021 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:25.629545   66021 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.639732   66021 pod_ready.go:92] pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.639756   66021 pod_ready.go:81] duration metric: took 10.187009ms for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.639764   66021 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.645147   66021 pod_ready.go:92] pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.645169   66021 pod_ready.go:81] duration metric: took 5.39858ms for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.645177   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.654707   66021 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.654733   66021 pod_ready.go:81] duration metric: took 9.549239ms for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.654744   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.662542   66021 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.662564   66021 pod_ready.go:81] duration metric: took 7.811214ms for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.662573   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:26.022161   66021 pod_ready.go:92] pod "kube-proxy-s7dwp" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:26.022183   66021 pod_ready.go:81] duration metric: took 359.604841ms for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:26.022192   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:28.034582   66021 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:29.648218   66232 crio.go:444] duration metric: took 1.892901715s to copy over tarball
	I0314 00:58:29.648301   66232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:32.846478   66232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.198145754s)
	I0314 00:58:32.846506   66232 crio.go:451] duration metric: took 3.198257099s to extract the tarball
	I0314 00:58:32.846513   66232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:32.893263   66232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:32.930449   66232 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:58:32.930473   66232 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:58:32.930511   66232 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:32.930536   66232 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:32.930550   66232 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:32.930559   66232 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:32.930802   66232 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:32.930888   66232 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 00:58:32.930940   66232 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:32.931147   66232 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:32.931888   66232 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:32.931948   66232 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:32.932319   66232 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:32.932341   66232 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 00:58:32.932374   66232 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:32.932381   66232 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:32.932370   66232 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:32.932419   66232 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:30.836400   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:32.841831   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:29.425434   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:29.425984   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:29.426008   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:29.425942   67191 retry.go:31] will retry after 703.398838ms: waiting for machine to come up
	I0314 00:58:30.130649   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:30.131265   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:30.131297   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:30.131224   67191 retry.go:31] will retry after 787.377618ms: waiting for machine to come up
	I0314 00:58:30.919951   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:30.920381   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:30.920416   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:30.920331   67191 retry.go:31] will retry after 1.211901471s: waiting for machine to come up
	I0314 00:58:32.133720   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:32.134308   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:32.134337   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:32.134254   67191 retry.go:31] will retry after 1.852403479s: waiting for machine to come up
	I0314 00:58:33.987895   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:33.988474   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:33.988503   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:33.988426   67191 retry.go:31] will retry after 2.321557159s: waiting for machine to come up
	I0314 00:58:30.530679   66021 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:30.530711   66021 pod_ready.go:81] duration metric: took 4.508510256s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:30.530725   66021 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:32.539227   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:34.543975   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:33.154008   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 00:58:33.158391   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.163815   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.167903   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.168224   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.169039   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.185385   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.418931   66232 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 00:58:33.418981   66232 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 00:58:33.419031   66232 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 00:58:33.419052   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419063   66232 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 00:58:33.419099   66232 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.419031   66232 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 00:58:33.419118   66232 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 00:58:33.419141   66232 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.419173   66232 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 00:58:33.419200   66232 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.419232   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419099   66232 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.419310   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419177   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419143   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419142   66232 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 00:58:33.419396   66232 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.419419   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419144   66232 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.419472   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.436581   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 00:58:33.436585   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.436693   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.436697   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.436760   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.436812   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.436821   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.605693   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 00:58:33.605727   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 00:58:33.605788   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 00:58:33.605799   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 00:58:33.605879   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 00:58:33.605912   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 00:58:33.605952   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 00:58:33.844071   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:33.989885   66232 cache_images.go:92] duration metric: took 1.059398314s to LoadCachedImages
	W0314 00:58:33.990001   66232 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0314 00:58:33.990027   66232 kubeadm.go:928] updating node { 192.168.72.11 8443 v1.20.0 crio true true} ...
	I0314 00:58:33.990157   66232 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-004791 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:33.990220   66232 ssh_runner.go:195] Run: crio config
	I0314 00:58:34.044723   66232 cni.go:84] Creating CNI manager for ""
	I0314 00:58:34.044746   66232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:34.044759   66232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:34.044775   66232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-004791 NodeName:old-k8s-version-004791 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 00:58:34.044900   66232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-004791"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:34.044958   66232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 00:58:34.059679   66232 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:34.059734   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:34.073682   66232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0314 00:58:34.095098   66232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:34.113899   66232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0314 00:58:34.132875   66232 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:34.137285   66232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:34.151566   66232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:34.276059   66232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:34.295472   66232 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791 for IP: 192.168.72.11
	I0314 00:58:34.295496   66232 certs.go:194] generating shared ca certs ...
	I0314 00:58:34.295528   66232 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:34.295718   66232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:34.295779   66232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:34.295794   66232 certs.go:256] generating profile certs ...
	I0314 00:58:34.295909   66232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/client.key
	I0314 00:58:34.295968   66232 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key.c57f8e0c
	I0314 00:58:34.296022   66232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key
	I0314 00:58:34.296176   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:34.296213   66232 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:34.296224   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:34.296255   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:34.296296   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:34.296336   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:34.296397   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:34.297181   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:34.351330   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:34.389003   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:34.439281   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:34.476704   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 00:58:34.524931   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:58:34.554905   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:34.584216   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:58:34.610661   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:34.636484   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:34.662623   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:34.692373   66232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:34.714670   66232 ssh_runner.go:195] Run: openssl version
	I0314 00:58:34.721394   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:34.734219   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.739692   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.739767   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.746281   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:34.758520   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:34.770960   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.775963   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.776034   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.782485   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:34.795932   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:34.808632   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.814277   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.814338   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.820985   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:34.832959   66232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:34.838642   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:34.845061   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:34.852475   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:34.859861   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:34.866413   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:34.873327   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 00:58:34.880000   66232 kubeadm.go:391] StartCluster: {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:34.880134   66232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:34.880194   66232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:34.927555   66232 cri.go:89] found id: ""
	I0314 00:58:34.927623   66232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:34.939638   66232 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:34.939668   66232 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:34.939677   66232 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:34.939741   66232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:34.950530   66232 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:34.952013   66232 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-004791" does not appear in /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:34.952997   66232 kubeconfig.go:62] /home/jenkins/minikube-integration/18375-4912/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-004791" cluster setting kubeconfig missing "old-k8s-version-004791" context setting]
	I0314 00:58:34.954526   66232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:34.956927   66232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:34.968566   66232 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.11
	I0314 00:58:34.968605   66232 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:34.968619   66232 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:34.968700   66232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:35.007848   66232 cri.go:89] found id: ""
	I0314 00:58:35.007925   66232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:35.025328   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:35.038637   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:35.038656   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:35.038709   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:35.050807   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:35.050869   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:35.063219   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:35.075855   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:35.075920   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:35.085699   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:35.095334   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:35.095380   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:35.105241   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:35.115726   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:35.115792   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:35.125426   66232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:35.135277   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:35.258033   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.100884   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.354746   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.473996   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.579335   66232 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:36.579424   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:37.079896   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:37.579976   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:38.079765   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:35.336276   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:37.336541   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:36.312235   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:36.312720   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:36.312746   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:36.312680   67191 retry.go:31] will retry after 2.808090469s: waiting for machine to come up
	I0314 00:58:39.123977   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:39.124488   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:39.124538   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:39.124440   67191 retry.go:31] will retry after 2.588860378s: waiting for machine to come up
	I0314 00:58:37.037739   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:39.540372   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:38.579818   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.079976   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.579658   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:40.079585   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:40.580162   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:41.079979   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:41.580503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:42.079887   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:42.579730   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:43.080073   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.838343   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:42.335840   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:41.714544   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:41.715054   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:41.715078   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:41.715008   67191 retry.go:31] will retry after 4.450032332s: waiting for machine to come up
	I0314 00:58:41.540801   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:44.037483   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:43.579875   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.080058   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.579576   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:45.080234   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:45.579747   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:46.080269   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:46.579541   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:47.079514   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:47.580409   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:48.080337   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.337213   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:46.835872   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:46.166725   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.167181   65557 main.go:141] libmachine: (embed-certs-164135) Found IP for machine: 192.168.50.72
	I0314 00:58:46.167200   65557 main.go:141] libmachine: (embed-certs-164135) Reserving static IP address...
	I0314 00:58:46.167211   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has current primary IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.167614   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "embed-certs-164135", mac: "52:54:00:58:8b:2b", ip: "192.168.50.72"} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.167650   65557 main.go:141] libmachine: (embed-certs-164135) Reserved static IP address: 192.168.50.72
	I0314 00:58:46.167671   65557 main.go:141] libmachine: (embed-certs-164135) DBG | skip adding static IP to network mk-embed-certs-164135 - found existing host DHCP lease matching {name: "embed-certs-164135", mac: "52:54:00:58:8b:2b", ip: "192.168.50.72"}
	I0314 00:58:46.167691   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Getting to WaitForSSH function...
	I0314 00:58:46.167705   65557 main.go:141] libmachine: (embed-certs-164135) Waiting for SSH to be available...
	I0314 00:58:46.169798   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.170208   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.170241   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.170374   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Using SSH client type: external
	I0314 00:58:46.170395   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa (-rw-------)
	I0314 00:58:46.170424   65557 main.go:141] libmachine: (embed-certs-164135) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:58:46.170436   65557 main.go:141] libmachine: (embed-certs-164135) DBG | About to run SSH command:
	I0314 00:58:46.170448   65557 main.go:141] libmachine: (embed-certs-164135) DBG | exit 0
	I0314 00:58:46.298947   65557 main.go:141] libmachine: (embed-certs-164135) DBG | SSH cmd err, output: <nil>: 
	I0314 00:58:46.299260   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetConfigRaw
	I0314 00:58:46.300011   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:46.302213   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.302573   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.302601   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.302857   65557 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/config.json ...
	I0314 00:58:46.303051   65557 machine.go:94] provisionDockerMachine start ...
	I0314 00:58:46.303073   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:46.303267   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.305543   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.305933   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.305966   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.306127   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.306278   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.306414   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.306542   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.306693   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.306879   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.306892   65557 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:58:46.423896   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:58:46.423927   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.424233   65557 buildroot.go:166] provisioning hostname "embed-certs-164135"
	I0314 00:58:46.424264   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.424489   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.427579   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.428007   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.428038   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.428220   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.428416   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.428609   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.428790   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.428972   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.429192   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.429222   65557 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-164135 && echo "embed-certs-164135" | sudo tee /etc/hostname
	I0314 00:58:46.563737   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-164135
	
	I0314 00:58:46.563766   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.566892   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.567220   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.567251   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.567453   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.567641   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.567802   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.567945   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.568094   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.568261   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.568276   65557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-164135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-164135/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-164135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:46.693410   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:46.693445   65557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:46.693499   65557 buildroot.go:174] setting up certificates
	I0314 00:58:46.693511   65557 provision.go:84] configureAuth start
	I0314 00:58:46.693529   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.693870   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:46.696706   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.697040   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.697071   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.697225   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.699614   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.699942   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.699973   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.700098   65557 provision.go:143] copyHostCerts
	I0314 00:58:46.700164   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:46.700178   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:46.700232   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:46.700361   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:46.700377   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:46.700411   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:46.700495   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:46.700505   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:46.700528   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:46.700580   65557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.embed-certs-164135 san=[127.0.0.1 192.168.50.72 embed-certs-164135 localhost minikube]
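	The server certificate generated above is a Docker-machine style cert signed by the local minikube CA, with the SAN list shown in the log (127.0.0.1, 192.168.50.72, embed-certs-164135, localhost, minikube). The following is a minimal crypto/x509 sketch of issuing a certificate with that kind of SAN list; it creates a fresh throwaway CA instead of reusing the ca.pem/ca-key.pem files from this run and is not minikube's code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for the ca.pem/ca-key.pem pair referenced above.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate carrying the SAN list from the log line.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-164135"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.72")},
		DNSNames:     []string{"embed-certs-164135", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}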
	I0314 00:58:46.821935   65557 provision.go:177] copyRemoteCerts
	I0314 00:58:46.822010   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:46.822046   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.824932   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.825275   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.825310   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.825512   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.825744   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.825887   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.826082   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:46.913839   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:46.943631   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 00:58:46.971617   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 00:58:46.999369   65557 provision.go:87] duration metric: took 305.844222ms to configureAuth
	I0314 00:58:46.999394   65557 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:46.999570   65557 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:46.999664   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.002702   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.003165   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.003190   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.003438   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.003687   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.003859   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.004006   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.004146   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:47.004340   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:47.004358   65557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:47.290132   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:47.290155   65557 machine.go:97] duration metric: took 987.089694ms to provisionDockerMachine
	I0314 00:58:47.290168   65557 start.go:293] postStartSetup for "embed-certs-164135" (driver="kvm2")
	I0314 00:58:47.290182   65557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:47.290203   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.290511   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:47.290552   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.293582   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.293932   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.293962   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.294089   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.294272   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.294428   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.294671   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.387339   65557 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:47.392557   65557 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:47.392582   65557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:47.392654   65557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:47.392748   65557 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:47.392858   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:47.404173   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:47.435222   65557 start.go:296] duration metric: took 145.038242ms for postStartSetup
	I0314 00:58:47.435269   65557 fix.go:56] duration metric: took 21.375588272s for fixHost
	I0314 00:58:47.435302   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.438631   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.439032   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.439076   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.439272   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.439467   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.439706   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.439850   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.440043   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:47.440200   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:47.440210   65557 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:58:47.560144   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377927.541841951
	
	I0314 00:58:47.560170   65557 fix.go:216] guest clock: 1710377927.541841951
	I0314 00:58:47.560182   65557 fix.go:229] Guest: 2024-03-14 00:58:47.541841951 +0000 UTC Remote: 2024-03-14 00:58:47.435274983 +0000 UTC m=+363.148559319 (delta=106.566968ms)
	I0314 00:58:47.560225   65557 fix.go:200] guest clock delta is within tolerance: 106.566968ms
	I0314 00:58:47.560232   65557 start.go:83] releasing machines lock for "embed-certs-164135", held for 21.500586263s
	I0314 00:58:47.560259   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.560524   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:47.563578   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.563984   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.564007   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.564165   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564627   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564837   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564919   65557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:47.564973   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.565070   65557 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:47.565097   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.567831   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568013   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568257   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.568284   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568398   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.568422   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568432   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.568625   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.568630   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.568821   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.568824   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.568927   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.568980   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.569131   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.652798   65557 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:47.689415   65557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:47.842567   65557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:47.849511   65557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:47.849574   65557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:47.868424   65557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:47.868448   65557 start.go:494] detecting cgroup driver to use...
	I0314 00:58:47.868509   65557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:47.887449   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:47.902382   65557 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:47.902442   65557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:47.916938   65557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:47.932214   65557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:48.055437   65557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:48.233856   65557 docker.go:233] disabling docker service ...
	I0314 00:58:48.233932   65557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:48.250632   65557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:48.265181   65557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:48.397526   65557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:48.539003   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:48.555791   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:48.576760   65557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:58:48.576812   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.589305   65557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:48.589410   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.602952   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.614619   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.626026   65557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:48.637921   65557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:48.648336   65557 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:48.648397   65557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:48.663603   65557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:48.674731   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:48.804506   65557 ssh_runner.go:195] Run: sudo systemctl restart crio
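	The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup), enables br_netfilter and IP forwarding, then restarts CRI-O. As a sketch of what the two sed commands accomplish, here is an equivalent stdlib-only Go rewrite of the drop-in file; the path and values are taken from the log, but the helper itself is illustrative rather than minikube's implementation.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}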
	I0314 00:58:48.949960   65557 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:48.950037   65557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:48.955185   65557 start.go:562] Will wait 60s for crictl version
	I0314 00:58:48.955248   65557 ssh_runner.go:195] Run: which crictl
	I0314 00:58:48.959205   65557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:48.998285   65557 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:48.998378   65557 ssh_runner.go:195] Run: crio --version
	I0314 00:58:49.028352   65557 ssh_runner.go:195] Run: crio --version
	I0314 00:58:49.061493   65557 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 00:58:49.062817   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:49.065664   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:49.066015   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:49.066042   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:49.066240   65557 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:49.071178   65557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
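	The bash one-liner above refreshes the host.minikube.internal entry: drop any stale line, append the current gateway IP, and copy the result back over /etc/hosts. A plain-Go sketch of the same idea, illustrative only and assuming it runs as root on the guest:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Equivalent of: grep -v $'\thost.minikube.internal$'
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.50.1\thost.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}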
	I0314 00:58:49.085832   65557 kubeadm.go:877] updating cluster {Name:embed-certs-164135 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:49.086050   65557 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:58:49.086127   65557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:49.127181   65557 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 00:58:49.127258   65557 ssh_runner.go:195] Run: which lz4
	I0314 00:58:49.131578   65557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 00:58:49.136474   65557 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:49.136504   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 00:58:46.038840   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:48.540509   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:48.579595   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.079898   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.580139   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:50.079945   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:50.579977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:51.079981   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:51.580391   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:52.080057   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:52.579968   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:53.080503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.336251   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:51.841160   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:50.939606   65557 crio.go:444] duration metric: took 1.808075483s to copy over tarball
	I0314 00:58:50.939682   65557 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:53.536072   65557 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.596358521s)
	I0314 00:58:53.536109   65557 crio.go:451] duration metric: took 2.596476827s to extract the tarball
	I0314 00:58:53.536119   65557 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:53.579265   65557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:53.626350   65557 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:58:53.626371   65557 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:58:53.626378   65557 kubeadm.go:928] updating node { 192.168.50.72 8443 v1.28.4 crio true true} ...
	I0314 00:58:53.626500   65557 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-164135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:53.626586   65557 ssh_runner.go:195] Run: crio config
	I0314 00:58:53.679923   65557 cni.go:84] Creating CNI manager for ""
	I0314 00:58:53.679946   65557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:53.679958   65557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:53.679976   65557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.72 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-164135 NodeName:embed-certs-164135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:58:53.680104   65557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-164135"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:53.680163   65557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:58:53.690891   65557 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:53.690972   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:53.701173   65557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0314 00:58:53.719020   65557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:53.737828   65557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
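	The kubeadm config dumped a few lines up is rendered from per-node values (node name, IP, API server port, Kubernetes version) and then copied to /var/tmp/minikube/kubeadm.yaml.new as shown here. A trimmed text/template sketch of that rendering step follows; it covers only the InitConfiguration stanza and is a hypothetical stand-in, not minikube's actual bootstrapper templates.

package main

import (
	"os"
	"text/template"
)

// Hypothetical trimmed template; the real config also includes the
// ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration
// sections shown in the log above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	data := struct {
		NodeName      string
		NodeIP        string
		APIServerPort int
	}{NodeName: "embed-certs-164135", NodeIP: "192.168.50.72", APIServerPort: 8443}
	template.Must(template.New("kubeadm").Parse(initCfg)).Execute(os.Stdout, data)
}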
	I0314 00:58:53.756425   65557 ssh_runner.go:195] Run: grep 192.168.50.72	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:53.760294   65557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:53.773705   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:53.892346   65557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:53.910603   65557 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135 for IP: 192.168.50.72
	I0314 00:58:53.910627   65557 certs.go:194] generating shared ca certs ...
	I0314 00:58:53.910647   65557 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:53.910827   65557 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:53.910871   65557 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:53.910880   65557 certs.go:256] generating profile certs ...
	I0314 00:58:53.910979   65557 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/client.key
	I0314 00:58:53.911031   65557 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.key.e2917335
	I0314 00:58:53.911064   65557 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.key
	I0314 00:58:53.911166   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:53.911192   65557 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:53.911239   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:53.911262   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:53.911282   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:53.911306   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:53.911340   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:53.911957   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:53.966930   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:54.004054   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:54.052130   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:54.079203   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0314 00:58:54.120151   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:58:54.148078   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:54.176982   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 00:58:54.205291   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:54.231890   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:54.258106   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:54.284561   65557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:54.303013   65557 ssh_runner.go:195] Run: openssl version
	I0314 00:58:54.309043   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:54.320237   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.325350   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.325394   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.331618   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:51.037616   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:53.039388   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:53.579463   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.080043   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.579984   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:55.080165   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:55.580029   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.079980   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.580014   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.080139   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.580122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:58.080405   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
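	The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above are a wait loop polling roughly every 500ms for the apiserver process to appear. A standalone poller with the same shape, assuming a local shell rather than the ssh_runner used in this log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		// pgrep exits non-zero (an error here) until a matching process exists.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("kube-apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
	}
	fmt.Println("timed out waiting for the kube-apiserver process")
}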
	I0314 00:58:54.335226   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:56.841123   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:54.343570   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:54.542451   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.547508   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.547561   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.553553   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:54.565071   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:54.577055   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.582453   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.582503   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.588916   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:54.601405   65557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:54.606092   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:54.612639   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:54.619071   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:54.625702   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:54.631739   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:54.637769   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 00:58:54.644061   65557 kubeadm.go:391] StartCluster: {Name:embed-certs-164135 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:54.644158   65557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:54.644207   65557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:54.683466   65557 cri.go:89] found id: ""
	I0314 00:58:54.683537   65557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:54.695034   65557 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:54.695056   65557 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:54.695062   65557 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:54.695122   65557 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:54.706010   65557 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:54.707111   65557 kubeconfig.go:125] found "embed-certs-164135" server: "https://192.168.50.72:8443"
	I0314 00:58:54.709121   65557 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:54.722953   65557 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.72
	I0314 00:58:54.722994   65557 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:54.723009   65557 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:54.723100   65557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:54.787268   65557 cri.go:89] found id: ""
	I0314 00:58:54.787345   65557 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:54.816753   65557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:54.828303   65557 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:54.828333   65557 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:54.828385   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:54.841953   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:54.842070   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:54.854072   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:54.867993   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:54.868062   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:54.878707   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:54.888993   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:54.889070   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:54.899214   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:54.909228   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:54.909279   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:54.920066   65557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:54.931094   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.052967   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.727704   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.951743   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:56.038342   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:56.138332   65557 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:56.138421   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.639433   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.138622   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.167124   65557 api_server.go:72] duration metric: took 1.028792267s to wait for apiserver process to appear ...
	I0314 00:58:57.167147   65557 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:57.167168   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:58:57.167606   65557 api_server.go:269] stopped: https://192.168.50.72:8443/healthz: Get "https://192.168.50.72:8443/healthz": dial tcp 192.168.50.72:8443: connect: connection refused
	I0314 00:58:57.668020   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:58:55.579569   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:58.039695   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:00.039862   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:00.321979   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:59:00.322014   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:59:00.322033   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:00.354801   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:59:00.354829   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:59:00.668268   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:00.673345   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:59:00.673375   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:59:01.167291   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:01.172646   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:59:01.172674   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:59:01.667928   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:01.675916   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0314 00:59:01.684834   65557 api_server.go:141] control plane version: v1.28.4
	I0314 00:59:01.684866   65557 api_server.go:131] duration metric: took 4.517711081s to wait for apiserver health ...
	I0314 00:59:01.684877   65557 cni.go:84] Creating CNI manager for ""
	I0314 00:59:01.684886   65557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:59:01.687151   65557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:58.580011   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:59.079610   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:59.579674   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:00.079861   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:00.579713   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.079977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.580027   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:02.079793   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:02.579549   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:03.080040   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.688950   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:59:01.730963   65557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:59:01.777163   65557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:59:01.788546   65557 system_pods.go:59] 8 kube-system pods found
	I0314 00:59:01.788590   65557 system_pods.go:61] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:59:01.788602   65557 system_pods.go:61] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:59:01.788614   65557 system_pods.go:61] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:59:01.788626   65557 system_pods.go:61] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:59:01.788641   65557 system_pods.go:61] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 00:59:01.788650   65557 system_pods.go:61] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:59:01.788662   65557 system_pods.go:61] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:59:01.788681   65557 system_pods.go:61] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 00:59:01.788692   65557 system_pods.go:74] duration metric: took 11.509392ms to wait for pod list to return data ...
	I0314 00:59:01.788701   65557 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:59:01.795122   65557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:59:01.795147   65557 node_conditions.go:123] node cpu capacity is 2
	I0314 00:59:01.795157   65557 node_conditions.go:105] duration metric: took 6.44942ms to run NodePressure ...
	I0314 00:59:01.795172   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:59:02.044317   65557 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:59:02.050019   65557 kubeadm.go:733] kubelet initialised
	I0314 00:59:02.050040   65557 kubeadm.go:734] duration metric: took 5.70331ms waiting for restarted kubelet to initialise ...
	I0314 00:59:02.050049   65557 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:02.056678   65557 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.061780   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.061803   65557 pod_ready.go:81] duration metric: took 5.104116ms for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.061811   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.061817   65557 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.067102   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "etcd-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.067123   65557 pod_ready.go:81] duration metric: took 5.298132ms for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.067134   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "etcd-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.067142   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.072079   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.072097   65557 pod_ready.go:81] duration metric: took 4.946567ms for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.072105   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.072110   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.181781   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.181814   65557 pod_ready.go:81] duration metric: took 109.687713ms for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.181827   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.181835   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.581700   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-proxy-wjz6d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.581726   65557 pod_ready.go:81] duration metric: took 399.880012ms for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.581734   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-proxy-wjz6d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.581741   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.981386   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.981415   65557 pod_ready.go:81] duration metric: took 399.66708ms for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.981428   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.981434   65557 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:03.381927   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:03.381964   65557 pod_ready.go:81] duration metric: took 400.519247ms for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:03.381976   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:03.381986   65557 pod_ready.go:38] duration metric: took 1.331926826s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:03.382007   65557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:59:03.397550   65557 ops.go:34] apiserver oom_adj: -16
	I0314 00:59:03.397571   65557 kubeadm.go:591] duration metric: took 8.702501848s to restartPrimaryControlPlane
	I0314 00:59:03.397583   65557 kubeadm.go:393] duration metric: took 8.753529728s to StartCluster
	I0314 00:59:03.397601   65557 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:59:03.397687   65557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:59:03.399793   65557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:59:03.400058   65557 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:59:03.402113   65557 out.go:177] * Verifying Kubernetes components...
	I0314 00:59:03.400139   65557 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:59:03.400293   65557 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:59:03.403722   65557 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-164135"
	I0314 00:59:03.403746   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:59:03.403773   65557 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-164135"
	W0314 00:59:03.403788   65557 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:59:03.403822   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.403725   65557 addons.go:69] Setting metrics-server=true in profile "embed-certs-164135"
	I0314 00:59:03.403888   65557 addons.go:234] Setting addon metrics-server=true in "embed-certs-164135"
	W0314 00:59:03.403922   65557 addons.go:243] addon metrics-server should already be in state true
	I0314 00:59:03.403727   65557 addons.go:69] Setting default-storageclass=true in profile "embed-certs-164135"
	I0314 00:59:03.403960   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.403978   65557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-164135"
	I0314 00:59:03.404257   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404295   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.404316   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404332   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.404355   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404387   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.420268   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0314 00:59:03.420835   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.421449   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.421474   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.421817   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.421860   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0314 00:59:03.422393   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.422414   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.422447   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.422893   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.422917   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.423232   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.423387   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.423804   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44335
	I0314 00:59:03.424136   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.424718   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.424737   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.426912   65557 addons.go:234] Setting addon default-storageclass=true in "embed-certs-164135"
	W0314 00:59:03.426935   65557 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:59:03.426962   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.427356   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.427387   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.427586   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.428046   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.428077   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.440982   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40425
	I0314 00:59:03.441492   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.442055   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.442077   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.442569   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I0314 00:59:03.442608   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.442838   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.443084   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.443708   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.443729   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.444112   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.444150   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33079
	I0314 00:59:03.444307   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.444598   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.444915   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.445374   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.445408   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.448170   65557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:59:03.445928   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.445963   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.449754   65557 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:59:03.448952   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.449778   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:59:03.451092   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.451092   65557 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:59.336088   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:01.338156   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:03.452582   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:59:03.451157   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.452695   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:59:03.452720   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.454750   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.455252   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.455282   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.455410   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.455600   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.455777   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.455944   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.455989   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.456439   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.456477   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.456710   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.456869   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.457034   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.457226   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.469815   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35435
	I0314 00:59:03.470353   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.470873   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.470895   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.471166   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.471370   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.472977   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.473244   65557 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:59:03.473258   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:59:03.473271   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.476223   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.476682   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.476709   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.476857   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.477040   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.477171   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.477302   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.616718   65557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:59:03.634198   65557 node_ready.go:35] waiting up to 6m0s for node "embed-certs-164135" to be "Ready" ...
	I0314 00:59:03.716113   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:59:03.749507   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:59:03.749536   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:59:03.755619   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:59:03.790208   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:59:03.790231   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:59:03.846087   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:59:03.846118   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:59:03.892534   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:59:04.977315   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.221655296s)
	I0314 00:59:04.977372   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977386   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977433   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.261285831s)
	I0314 00:59:04.977471   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977481   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977698   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.977722   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.977731   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977738   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977783   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.977705   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.978016   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.978033   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.978067   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.978803   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.978822   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.978842   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.978883   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.980542   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.980629   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.980683   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.985502   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.985521   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.985822   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.985854   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.985862   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.071684   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.179091576s)
	I0314 00:59:05.071736   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:05.071751   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:05.072016   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:05.072040   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.072050   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:05.072057   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:05.072248   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:05.072260   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.072271   65557 addons.go:470] Verifying addon metrics-server=true in "embed-certs-164135"
	I0314 00:59:05.074420   65557 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:59:02.537641   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:04.539777   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:03.580280   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:04.079957   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:04.580070   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:05.079965   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:05.580193   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:06.079657   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:06.580026   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:07.080460   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:07.579573   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:08.079458   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:03.836267   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:05.837427   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:07.838129   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:05.075856   65557 addons.go:505] duration metric: took 1.675722032s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 00:59:05.639116   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:08.138282   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:07.039088   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:09.538790   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:08.579872   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.080006   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.579949   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:10.079511   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:10.579616   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:11.080003   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:11.580335   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:12.079830   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:12.579519   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:13.080004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.839624   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:12.335977   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:10.138471   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:11.138534   65557 node_ready.go:49] node "embed-certs-164135" has status "Ready":"True"
	I0314 00:59:11.138572   65557 node_ready.go:38] duration metric: took 7.504341185s for node "embed-certs-164135" to be "Ready" ...
	I0314 00:59:11.138593   65557 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:11.145002   65557 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:11.150712   65557 pod_ready.go:92] pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:11.150735   65557 pod_ready.go:81] duration metric: took 5.69376ms for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:11.150743   65557 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:13.157122   65557 pod_ready.go:102] pod "etcd-embed-certs-164135" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:11.539006   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:14.038372   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:13.580021   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.079972   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.580562   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:15.079973   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:15.580183   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:16.080442   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:16.580265   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:17.079726   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:17.580004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:18.080000   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.336576   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:16.836200   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:15.158112   65557 pod_ready.go:92] pod "etcd-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.158134   65557 pod_ready.go:81] duration metric: took 4.0073854s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.158143   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.164046   65557 pod_ready.go:92] pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.164066   65557 pod_ready.go:81] duration metric: took 5.916933ms for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.164075   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.172381   65557 pod_ready.go:92] pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.172400   65557 pod_ready.go:81] duration metric: took 8.319741ms for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.172408   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.178027   65557 pod_ready.go:92] pod "kube-proxy-wjz6d" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.178047   65557 pod_ready.go:81] duration metric: took 5.632365ms for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.178066   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.185425   65557 pod_ready.go:92] pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.185445   65557 pod_ready.go:81] duration metric: took 7.370111ms for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.185455   65557 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:17.191963   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:19.198718   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:16.537469   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:18.537882   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:18.580382   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.079467   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.579813   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:20.080492   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:20.580051   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:21.079982   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:21.579462   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:22.079943   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:22.579753   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:23.079986   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.336004   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:21.835829   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:21.694213   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:24.192099   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:20.538270   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:23.038355   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:23.579609   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:24.080429   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:24.579806   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:25.079568   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:25.580411   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:26.079986   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:26.580297   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:27.079547   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:27.579543   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:28.080116   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:23.837356   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:25.844148   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.336761   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:26.193550   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.693261   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:25.537801   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.038015   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.580503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:29.079562   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:29.579984   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.079977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.579657   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:31.080002   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:31.580430   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:32.079709   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:32.579764   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:33.079717   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.835476   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.335371   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:31.192779   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.194092   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:30.537951   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:32.538810   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:35.038186   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.579468   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:34.079959   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:34.579891   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:35.079953   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:35.579666   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:36.080471   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:36.580528   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:36.580620   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:36.628794   66232 cri.go:89] found id: ""
	I0314 00:59:36.628825   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.628836   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:36.628844   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:36.628903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:36.665474   66232 cri.go:89] found id: ""
	I0314 00:59:36.665504   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.665514   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:36.665521   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:36.665612   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:36.703404   66232 cri.go:89] found id: ""
	I0314 00:59:36.703436   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.703443   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:36.703449   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:36.703515   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:36.739602   66232 cri.go:89] found id: ""
	I0314 00:59:36.739629   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.739636   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:36.739642   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:36.739698   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:36.777836   66232 cri.go:89] found id: ""
	I0314 00:59:36.777862   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.777869   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:36.777875   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:36.777921   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:36.817211   66232 cri.go:89] found id: ""
	I0314 00:59:36.817254   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.817264   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:36.817271   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:36.817320   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:36.855890   66232 cri.go:89] found id: ""
	I0314 00:59:36.855924   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.855943   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:36.855951   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:36.856007   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:36.894333   66232 cri.go:89] found id: ""
	I0314 00:59:36.894360   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.894371   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:36.894391   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:36.894406   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:36.909757   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:36.909796   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:37.039754   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:37.039774   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:37.039785   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:37.100601   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:37.100635   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:37.143950   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:37.143976   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:35.837374   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:38.335068   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:35.692269   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:37.692333   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:37.538270   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:40.039124   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:39.696850   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:39.720410   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:39.720480   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:39.759574   66232 cri.go:89] found id: ""
	I0314 00:59:39.759624   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.759635   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:39.759643   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:39.759719   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:39.802990   66232 cri.go:89] found id: ""
	I0314 00:59:39.803013   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.803021   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:39.803026   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:39.803090   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:39.850691   66232 cri.go:89] found id: ""
	I0314 00:59:39.850718   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.850729   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:39.850736   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:39.850831   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:39.890748   66232 cri.go:89] found id: ""
	I0314 00:59:39.890796   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.890806   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:39.890813   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:39.890871   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:39.929333   66232 cri.go:89] found id: ""
	I0314 00:59:39.929361   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.929368   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:39.929374   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:39.929428   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:39.969207   66232 cri.go:89] found id: ""
	I0314 00:59:39.969241   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.969248   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:39.969254   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:39.969328   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:40.006207   66232 cri.go:89] found id: ""
	I0314 00:59:40.006241   66232 logs.go:276] 0 containers: []
	W0314 00:59:40.006252   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:40.006260   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:40.006343   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:40.047357   66232 cri.go:89] found id: ""
	I0314 00:59:40.047384   66232 logs.go:276] 0 containers: []
	W0314 00:59:40.047391   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:40.047400   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:40.047418   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:40.095431   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:40.095461   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:40.151675   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:40.151710   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:40.169388   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:40.169426   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:40.252915   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:40.252941   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:40.252958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:42.828437   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:42.842753   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:42.842838   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:42.881157   66232 cri.go:89] found id: ""
	I0314 00:59:42.881189   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.881200   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:42.881207   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:42.881267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:42.921364   66232 cri.go:89] found id: ""
	I0314 00:59:42.921393   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.921405   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:42.921412   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:42.921477   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:42.956622   66232 cri.go:89] found id: ""
	I0314 00:59:42.956647   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.956655   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:42.956660   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:42.956705   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:42.994476   66232 cri.go:89] found id: ""
	I0314 00:59:42.994502   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.994514   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:42.994521   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:42.994580   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:43.032061   66232 cri.go:89] found id: ""
	I0314 00:59:43.032089   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.032099   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:43.032106   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:43.032177   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:43.073398   66232 cri.go:89] found id: ""
	I0314 00:59:43.073427   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.073444   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:43.073452   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:43.073527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:40.336003   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.336136   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:40.192758   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.193411   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.538036   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:45.038933   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:43.111407   66232 cri.go:89] found id: ""
	I0314 00:59:43.111891   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.111902   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:43.111909   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:43.111988   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:43.154347   66232 cri.go:89] found id: ""
	I0314 00:59:43.154374   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.154384   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:43.154393   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:43.154422   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:43.202605   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:43.202636   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:43.257108   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:43.257143   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:43.273252   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:43.273282   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:43.347646   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:43.347671   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:43.347687   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:45.920045   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:45.934299   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:45.934379   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:45.973556   66232 cri.go:89] found id: ""
	I0314 00:59:45.973588   66232 logs.go:276] 0 containers: []
	W0314 00:59:45.973599   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:45.973607   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:45.973668   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:46.012623   66232 cri.go:89] found id: ""
	I0314 00:59:46.012653   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.012660   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:46.012667   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:46.012720   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:46.052290   66232 cri.go:89] found id: ""
	I0314 00:59:46.052318   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.052328   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:46.052336   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:46.052401   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:46.089098   66232 cri.go:89] found id: ""
	I0314 00:59:46.089129   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.089139   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:46.089147   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:46.089207   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:46.149733   66232 cri.go:89] found id: ""
	I0314 00:59:46.149768   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.149778   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:46.149787   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:46.149856   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:46.210517   66232 cri.go:89] found id: ""
	I0314 00:59:46.210548   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.210555   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:46.210563   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:46.210631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:46.275257   66232 cri.go:89] found id: ""
	I0314 00:59:46.275288   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.275299   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:46.275307   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:46.275373   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:46.319784   66232 cri.go:89] found id: ""
	I0314 00:59:46.319808   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.319819   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:46.319829   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:46.319843   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:46.366285   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:46.366319   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:46.423978   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:46.424015   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:46.438508   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:46.438535   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:46.509518   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:46.509538   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:46.509552   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:44.337116   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:46.341237   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:44.698272   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:47.192460   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.193298   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:47.537766   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.541370   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.089210   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:49.105225   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:49.105298   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:49.146293   66232 cri.go:89] found id: ""
	I0314 00:59:49.146319   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.146326   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:49.146331   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:49.146377   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:49.190814   66232 cri.go:89] found id: ""
	I0314 00:59:49.190838   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.190847   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:49.190854   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:49.190910   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:49.230181   66232 cri.go:89] found id: ""
	I0314 00:59:49.230206   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.230214   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:49.230219   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:49.230267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:49.268437   66232 cri.go:89] found id: ""
	I0314 00:59:49.268468   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.268479   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:49.268486   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:49.268547   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:49.306838   66232 cri.go:89] found id: ""
	I0314 00:59:49.306869   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.306877   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:49.306883   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:49.306944   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:49.348907   66232 cri.go:89] found id: ""
	I0314 00:59:49.348937   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.348948   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:49.348956   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:49.349014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:49.391993   66232 cri.go:89] found id: ""
	I0314 00:59:49.392017   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.392025   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:49.392030   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:49.392133   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:49.433957   66232 cri.go:89] found id: ""
	I0314 00:59:49.433988   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.434000   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:49.434011   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:49.434026   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:49.490808   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:49.490846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:49.506203   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:49.506231   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:49.596998   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:49.597017   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:49.597034   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:49.683358   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:49.683396   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:52.230217   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:52.243787   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:52.243845   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:52.284399   66232 cri.go:89] found id: ""
	I0314 00:59:52.284424   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.284434   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:52.284441   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:52.284486   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:52.319413   66232 cri.go:89] found id: ""
	I0314 00:59:52.319439   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.319450   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:52.319457   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:52.319517   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:52.355774   66232 cri.go:89] found id: ""
	I0314 00:59:52.355804   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.355812   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:52.355818   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:52.355873   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:52.393420   66232 cri.go:89] found id: ""
	I0314 00:59:52.393445   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.393453   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:52.393459   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:52.393562   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:52.435598   66232 cri.go:89] found id: ""
	I0314 00:59:52.435627   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.435637   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:52.435646   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:52.435700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:52.478202   66232 cri.go:89] found id: ""
	I0314 00:59:52.478230   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.478241   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:52.478250   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:52.478300   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:52.515135   66232 cri.go:89] found id: ""
	I0314 00:59:52.515165   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.515176   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:52.515185   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:52.515251   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:52.553094   66232 cri.go:89] found id: ""
	I0314 00:59:52.553126   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.553143   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:52.553150   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:52.553174   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:52.568538   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:52.568565   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:52.643136   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:52.643164   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:52.643180   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:52.729674   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:52.729708   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:52.778312   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:52.778343   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:48.837200   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:51.336514   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:53.338910   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:51.693709   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:53.694241   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:52.037993   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:54.038771   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:55.333953   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:55.348232   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:55.348292   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:55.386488   66232 cri.go:89] found id: ""
	I0314 00:59:55.386517   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.386526   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:55.386534   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:55.386597   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:55.428706   66232 cri.go:89] found id: ""
	I0314 00:59:55.428737   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.428748   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:55.428755   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:55.428820   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:55.465448   66232 cri.go:89] found id: ""
	I0314 00:59:55.465478   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.465489   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:55.465495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:55.465558   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:55.503442   66232 cri.go:89] found id: ""
	I0314 00:59:55.503469   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.503479   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:55.503487   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:55.503582   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:55.542098   66232 cri.go:89] found id: ""
	I0314 00:59:55.542127   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.542137   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:55.542145   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:55.542209   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:55.580298   66232 cri.go:89] found id: ""
	I0314 00:59:55.580321   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.580329   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:55.580335   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:55.580405   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:55.625460   66232 cri.go:89] found id: ""
	I0314 00:59:55.625482   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.625489   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:55.625495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:55.625544   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:55.663273   66232 cri.go:89] found id: ""
	I0314 00:59:55.663301   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.663316   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:55.663327   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:55.663373   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:55.680020   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:55.680047   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:55.764504   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:55.764523   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:55.764537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:55.842804   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:55.842837   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:55.889505   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:55.889540   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:55.836332   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.335436   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:56.193387   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.692808   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:56.045666   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.538405   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.445178   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:58.459321   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:58.459397   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:58.498338   66232 cri.go:89] found id: ""
	I0314 00:59:58.498362   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.498369   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:58.498374   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:58.498422   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:58.536406   66232 cri.go:89] found id: ""
	I0314 00:59:58.536434   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.536444   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:58.536451   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:58.536509   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:58.574902   66232 cri.go:89] found id: ""
	I0314 00:59:58.574930   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.574937   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:58.574943   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:58.574988   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:58.613132   66232 cri.go:89] found id: ""
	I0314 00:59:58.613154   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.613162   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:58.613167   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:58.613211   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:58.651052   66232 cri.go:89] found id: ""
	I0314 00:59:58.651076   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.651085   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:58.651104   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:58.651170   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:58.686347   66232 cri.go:89] found id: ""
	I0314 00:59:58.686375   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.686385   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:58.686393   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:58.686443   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:58.725992   66232 cri.go:89] found id: ""
	I0314 00:59:58.726021   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.726030   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:58.726037   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:58.726113   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:58.764130   66232 cri.go:89] found id: ""
	I0314 00:59:58.764153   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.764161   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:58.764169   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:58.764181   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:58.816153   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:58.816195   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:58.831675   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:58.831703   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:58.912867   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:58.912890   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:58.912902   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:59.000502   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:59.000537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:01.544701   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:01.561114   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:01.561192   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:01.603886   66232 cri.go:89] found id: ""
	I0314 01:00:01.603916   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.603924   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:01.603929   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:01.603989   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:01.645142   66232 cri.go:89] found id: ""
	I0314 01:00:01.645174   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.645189   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:01.645196   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:01.645248   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:01.686281   66232 cri.go:89] found id: ""
	I0314 01:00:01.686317   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.686326   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:01.686332   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:01.686389   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:01.729909   66232 cri.go:89] found id: ""
	I0314 01:00:01.729945   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.729955   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:01.729963   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:01.730029   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:01.773709   66232 cri.go:89] found id: ""
	I0314 01:00:01.773746   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.773754   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:01.773770   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:01.773833   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:01.813535   66232 cri.go:89] found id: ""
	I0314 01:00:01.813560   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.813568   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:01.813573   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:01.813632   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:01.855452   66232 cri.go:89] found id: ""
	I0314 01:00:01.855482   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.855493   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:01.855499   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:01.855561   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:01.892261   66232 cri.go:89] found id: ""
	I0314 01:00:01.892287   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.892297   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:01.892308   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:01.892322   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:01.945227   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:01.945258   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:01.961280   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:01.961307   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:02.039204   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:02.039227   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:02.039241   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:02.116966   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:02.117002   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:00.840447   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:03.335752   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:00.693223   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:02.694565   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:00.538670   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:02.539348   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:05.037780   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:04.659869   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:04.673750   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:04.673818   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:04.713767   66232 cri.go:89] found id: ""
	I0314 01:00:04.713802   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.713813   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:04.713820   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:04.713882   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:04.750205   66232 cri.go:89] found id: ""
	I0314 01:00:04.750240   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.750252   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:04.750259   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:04.750323   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:04.789742   66232 cri.go:89] found id: ""
	I0314 01:00:04.789770   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.789778   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:04.789784   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:04.789832   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:04.826033   66232 cri.go:89] found id: ""
	I0314 01:00:04.826071   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.826091   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:04.826099   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:04.826161   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:04.865283   66232 cri.go:89] found id: ""
	I0314 01:00:04.865320   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.865330   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:04.865339   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:04.865387   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:04.906716   66232 cri.go:89] found id: ""
	I0314 01:00:04.906745   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.906756   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:04.906774   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:04.906835   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:04.943834   66232 cri.go:89] found id: ""
	I0314 01:00:04.943867   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.943879   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:04.943887   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:04.943953   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:04.986408   66232 cri.go:89] found id: ""
	I0314 01:00:04.986435   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.986445   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:04.986456   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:04.986472   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:05.040543   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:05.040583   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:05.055657   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:05.055685   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:05.133883   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:05.133907   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:05.133921   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:05.213133   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:05.213170   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:07.754533   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:07.768008   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:07.768084   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:07.807785   66232 cri.go:89] found id: ""
	I0314 01:00:07.807814   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.807823   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:07.807830   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:07.807889   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:07.847500   66232 cri.go:89] found id: ""
	I0314 01:00:07.847529   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.847539   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:07.847547   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:07.847609   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:07.886507   66232 cri.go:89] found id: ""
	I0314 01:00:07.886534   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.886557   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:07.886563   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:07.886619   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:07.923881   66232 cri.go:89] found id: ""
	I0314 01:00:07.923908   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.923918   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:07.923925   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:07.923985   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:07.959149   66232 cri.go:89] found id: ""
	I0314 01:00:07.959179   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.959190   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:07.959198   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:07.959257   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:07.995821   66232 cri.go:89] found id: ""
	I0314 01:00:07.995849   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.995861   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:07.995869   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:07.995926   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:08.033530   66232 cri.go:89] found id: ""
	I0314 01:00:08.033554   66232 logs.go:276] 0 containers: []
	W0314 01:00:08.033561   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:08.033567   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:08.033613   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:08.069304   66232 cri.go:89] found id: ""
	I0314 01:00:08.069332   66232 logs.go:276] 0 containers: []
	W0314 01:00:08.069341   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:08.069352   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:08.069366   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:05.838145   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:08.336193   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:05.192544   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:07.193040   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:09.195569   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:07.040795   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:09.538606   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:08.122695   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:08.122727   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:08.138439   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:08.138466   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:08.220553   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:08.220574   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:08.220586   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:08.301108   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:08.301143   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:10.858540   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:10.872473   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:10.872527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:10.911114   66232 cri.go:89] found id: ""
	I0314 01:00:10.911143   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.911154   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:10.911161   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:10.911218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:10.951647   66232 cri.go:89] found id: ""
	I0314 01:00:10.951678   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.951690   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:10.951697   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:10.951764   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:10.989244   66232 cri.go:89] found id: ""
	I0314 01:00:10.989272   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.989283   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:10.989291   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:10.989368   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:11.029977   66232 cri.go:89] found id: ""
	I0314 01:00:11.030004   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.030011   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:11.030017   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:11.030079   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:11.067444   66232 cri.go:89] found id: ""
	I0314 01:00:11.067467   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.067474   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:11.067480   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:11.067527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:11.104202   66232 cri.go:89] found id: ""
	I0314 01:00:11.104225   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.104233   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:11.104242   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:11.104302   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:11.143323   66232 cri.go:89] found id: ""
	I0314 01:00:11.143348   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.143376   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:11.143384   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:11.143438   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:11.182568   66232 cri.go:89] found id: ""
	I0314 01:00:11.182598   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.182608   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:11.182619   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:11.182640   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:11.199532   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:11.199572   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:11.276697   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:11.276722   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:11.276737   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:11.362086   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:11.362121   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:11.407686   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:11.407721   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:10.338610   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:12.835743   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:11.201752   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:13.692443   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:12.038010   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:14.038915   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:13.965971   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:13.981052   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:13.981124   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:14.021047   66232 cri.go:89] found id: ""
	I0314 01:00:14.021073   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.021085   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:14.021092   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:14.021150   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:14.066605   66232 cri.go:89] found id: ""
	I0314 01:00:14.066632   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.066638   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:14.066644   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:14.066689   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:14.105253   66232 cri.go:89] found id: ""
	I0314 01:00:14.105281   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.105290   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:14.105299   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:14.105407   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:14.141084   66232 cri.go:89] found id: ""
	I0314 01:00:14.141116   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.141126   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:14.141133   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:14.141194   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:14.177883   66232 cri.go:89] found id: ""
	I0314 01:00:14.177914   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.177924   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:14.177944   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:14.178010   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:14.217102   66232 cri.go:89] found id: ""
	I0314 01:00:14.217133   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.217144   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:14.217162   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:14.217218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:14.256624   66232 cri.go:89] found id: ""
	I0314 01:00:14.256652   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.256662   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:14.256669   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:14.256731   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:14.295330   66232 cri.go:89] found id: ""
	I0314 01:00:14.295358   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.295368   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:14.295378   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:14.295395   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:14.351898   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:14.351947   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:14.368360   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:14.368399   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:14.447629   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:14.447651   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:14.447678   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:14.536275   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:14.536307   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:17.079641   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:17.093657   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:17.093730   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:17.131290   66232 cri.go:89] found id: ""
	I0314 01:00:17.131318   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.131327   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:17.131333   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:17.131379   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:17.169832   66232 cri.go:89] found id: ""
	I0314 01:00:17.169864   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.169874   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:17.169882   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:17.169942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:17.206961   66232 cri.go:89] found id: ""
	I0314 01:00:17.206982   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.206989   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:17.206994   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:17.207047   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:17.245675   66232 cri.go:89] found id: ""
	I0314 01:00:17.245703   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.245714   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:17.245721   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:17.245776   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:17.287768   66232 cri.go:89] found id: ""
	I0314 01:00:17.287797   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.287808   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:17.287815   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:17.287881   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:17.322555   66232 cri.go:89] found id: ""
	I0314 01:00:17.322590   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.322600   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:17.322608   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:17.322669   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:17.361149   66232 cri.go:89] found id: ""
	I0314 01:00:17.361176   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.361190   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:17.361197   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:17.361255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:17.397191   66232 cri.go:89] found id: ""
	I0314 01:00:17.397218   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.397227   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:17.397236   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:17.397248   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:17.412959   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:17.412988   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:17.493344   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:17.493364   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:17.493375   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:17.573531   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:17.573564   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:17.616326   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:17.616369   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:14.837070   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:17.335625   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:15.693453   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:18.192702   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:16.537571   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:18.537742   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.171238   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:20.186834   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:20.186890   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:20.226834   66232 cri.go:89] found id: ""
	I0314 01:00:20.226856   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.226863   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:20.226868   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:20.226916   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:20.263003   66232 cri.go:89] found id: ""
	I0314 01:00:20.263032   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.263043   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:20.263052   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:20.263135   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:20.306354   66232 cri.go:89] found id: ""
	I0314 01:00:20.306378   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.306388   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:20.306397   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:20.306458   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:20.342460   66232 cri.go:89] found id: ""
	I0314 01:00:20.342491   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.342501   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:20.342509   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:20.342572   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:20.383367   66232 cri.go:89] found id: ""
	I0314 01:00:20.383395   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.383406   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:20.383414   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:20.383474   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:20.423190   66232 cri.go:89] found id: ""
	I0314 01:00:20.423220   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.423231   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:20.423240   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:20.423296   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:20.473454   66232 cri.go:89] found id: ""
	I0314 01:00:20.473501   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.473510   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:20.473518   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:20.473577   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:20.517922   66232 cri.go:89] found id: ""
	I0314 01:00:20.517954   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.517964   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:20.517976   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:20.517992   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:20.572023   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:20.572059   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:20.589573   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:20.589601   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:20.670843   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:20.670866   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:20.670881   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:20.753165   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:20.753201   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:19.336013   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:21.338995   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.194020   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:22.194237   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.539631   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:22.539868   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:25.037490   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:23.299823   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:23.313303   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:23.313398   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:23.352500   66232 cri.go:89] found id: ""
	I0314 01:00:23.352531   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.352542   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:23.352550   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:23.352610   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:23.391967   66232 cri.go:89] found id: ""
	I0314 01:00:23.391997   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.392005   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:23.392013   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:23.392078   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:23.433269   66232 cri.go:89] found id: ""
	I0314 01:00:23.433303   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.433314   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:23.433324   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:23.433388   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:23.471251   66232 cri.go:89] found id: ""
	I0314 01:00:23.471278   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.471290   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:23.471297   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:23.471359   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:23.507920   66232 cri.go:89] found id: ""
	I0314 01:00:23.507952   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.507960   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:23.507966   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:23.508023   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:23.550432   66232 cri.go:89] found id: ""
	I0314 01:00:23.550464   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.550474   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:23.550483   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:23.550570   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:23.589750   66232 cri.go:89] found id: ""
	I0314 01:00:23.589773   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.589781   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:23.589789   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:23.589853   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:23.626135   66232 cri.go:89] found id: ""
	I0314 01:00:23.626171   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.626191   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:23.626202   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:23.626217   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:23.681729   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:23.681763   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:23.698219   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:23.698246   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:23.773285   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:23.773309   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:23.773321   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:23.856417   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:23.856449   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:26.399787   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:26.414459   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:26.414525   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:26.452117   66232 cri.go:89] found id: ""
	I0314 01:00:26.452142   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.452153   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:26.452162   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:26.452223   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:26.488892   66232 cri.go:89] found id: ""
	I0314 01:00:26.488918   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.488925   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:26.488931   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:26.488980   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:26.530194   66232 cri.go:89] found id: ""
	I0314 01:00:26.530224   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.530234   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:26.530242   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:26.530307   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:26.571356   66232 cri.go:89] found id: ""
	I0314 01:00:26.571382   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.571394   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:26.571402   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:26.571469   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:26.611465   66232 cri.go:89] found id: ""
	I0314 01:00:26.611492   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.611500   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:26.611522   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:26.611572   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:26.649783   66232 cri.go:89] found id: ""
	I0314 01:00:26.649811   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.649821   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:26.649830   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:26.649894   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:26.687519   66232 cri.go:89] found id: ""
	I0314 01:00:26.687546   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.687556   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:26.687569   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:26.687631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:26.726277   66232 cri.go:89] found id: ""
	I0314 01:00:26.726311   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.726322   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:26.726333   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:26.726349   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:26.743133   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:26.743162   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:26.824026   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:26.824046   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:26.824062   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:26.907032   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:26.907065   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:26.977583   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:26.977609   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:23.837152   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:26.335576   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:24.694276   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:27.192662   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.193302   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:27.037952   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.038545   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.530758   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:29.546984   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:29.547050   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:29.589191   66232 cri.go:89] found id: ""
	I0314 01:00:29.589214   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.589222   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:29.589231   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:29.589294   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:29.630380   66232 cri.go:89] found id: ""
	I0314 01:00:29.630407   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.630419   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:29.630426   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:29.630488   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:29.667407   66232 cri.go:89] found id: ""
	I0314 01:00:29.667443   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.667455   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:29.667463   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:29.667524   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:29.705745   66232 cri.go:89] found id: ""
	I0314 01:00:29.705776   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.705784   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:29.705790   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:29.705851   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:29.745280   66232 cri.go:89] found id: ""
	I0314 01:00:29.745314   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.745324   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:29.745335   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:29.745390   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:29.782900   66232 cri.go:89] found id: ""
	I0314 01:00:29.782935   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.782945   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:29.782954   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:29.783014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:29.825324   66232 cri.go:89] found id: ""
	I0314 01:00:29.825352   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.825363   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:29.825371   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:29.825436   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:29.869433   66232 cri.go:89] found id: ""
	I0314 01:00:29.869466   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.869476   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:29.869487   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:29.869502   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:29.912468   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:29.912494   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:29.965515   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:29.965555   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:29.982343   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:29.982367   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:30.057772   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:30.057797   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:30.057814   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:32.644707   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:32.667874   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:32.667950   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:32.727931   66232 cri.go:89] found id: ""
	I0314 01:00:32.727960   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.727971   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:32.727979   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:32.728038   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:32.766885   66232 cri.go:89] found id: ""
	I0314 01:00:32.766911   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.766921   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:32.766929   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:32.766989   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:32.804099   66232 cri.go:89] found id: ""
	I0314 01:00:32.804128   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.804137   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:32.804143   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:32.804200   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:32.845468   66232 cri.go:89] found id: ""
	I0314 01:00:32.845498   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.845507   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:32.845516   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:32.845607   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:32.884350   66232 cri.go:89] found id: ""
	I0314 01:00:32.884372   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.884380   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:32.884386   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:32.884437   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:32.920634   66232 cri.go:89] found id: ""
	I0314 01:00:32.920676   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.920692   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:32.920700   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:32.920756   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:32.959586   66232 cri.go:89] found id: ""
	I0314 01:00:32.959616   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.959627   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:32.959634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:32.959699   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:32.998814   66232 cri.go:89] found id: ""
	I0314 01:00:32.998854   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.998865   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:32.998882   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:32.998895   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:33.054782   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:33.054813   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:33.069772   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:33.069807   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:00:28.836740   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.335908   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:33.336613   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.692393   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:33.695343   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.539723   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:34.038889   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	W0314 01:00:33.153893   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:33.153913   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:33.153925   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:33.234165   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:33.234197   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:35.781872   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:35.797220   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:35.797300   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:35.836749   66232 cri.go:89] found id: ""
	I0314 01:00:35.836773   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.836779   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:35.836785   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:35.836841   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:35.875754   66232 cri.go:89] found id: ""
	I0314 01:00:35.875782   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.875790   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:35.875797   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:35.875844   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:35.914337   66232 cri.go:89] found id: ""
	I0314 01:00:35.914360   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.914368   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:35.914373   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:35.914428   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:35.954287   66232 cri.go:89] found id: ""
	I0314 01:00:35.954306   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.954313   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:35.954318   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:35.954365   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:35.995361   66232 cri.go:89] found id: ""
	I0314 01:00:35.995385   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.995393   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:35.995398   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:35.995455   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:36.040462   66232 cri.go:89] found id: ""
	I0314 01:00:36.040488   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.040497   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:36.040503   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:36.040567   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:36.078740   66232 cri.go:89] found id: ""
	I0314 01:00:36.078786   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.078797   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:36.078814   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:36.078885   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:36.120165   66232 cri.go:89] found id: ""
	I0314 01:00:36.120193   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.120203   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:36.120213   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:36.120239   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:36.136275   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:36.136312   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:36.217907   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:36.217929   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:36.217944   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:36.295177   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:36.295212   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:36.342587   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:36.342623   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:35.336966   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:37.337764   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:36.193887   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.693150   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:36.538529   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.538996   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.900832   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:38.914693   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:38.914782   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:38.954297   66232 cri.go:89] found id: ""
	I0314 01:00:38.954333   66232 logs.go:276] 0 containers: []
	W0314 01:00:38.954347   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:38.954354   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:38.954414   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:38.992427   66232 cri.go:89] found id: ""
	I0314 01:00:38.992458   66232 logs.go:276] 0 containers: []
	W0314 01:00:38.992468   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:38.992474   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:38.992521   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:39.028595   66232 cri.go:89] found id: ""
	I0314 01:00:39.028629   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.028640   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:39.028647   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:39.028707   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:39.064418   66232 cri.go:89] found id: ""
	I0314 01:00:39.064443   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.064450   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:39.064456   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:39.064503   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:39.101007   66232 cri.go:89] found id: ""
	I0314 01:00:39.101050   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.101060   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:39.101066   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:39.101125   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:39.142913   66232 cri.go:89] found id: ""
	I0314 01:00:39.142940   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.142950   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:39.142957   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:39.143018   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:39.179957   66232 cri.go:89] found id: ""
	I0314 01:00:39.179986   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.179997   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:39.180007   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:39.180068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:39.219688   66232 cri.go:89] found id: ""
	I0314 01:00:39.219712   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.219720   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:39.219730   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:39.219747   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:39.234611   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:39.234642   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:39.306760   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:39.306808   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:39.306824   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:39.390739   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:39.390799   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:39.441782   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:39.441813   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:41.994667   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:42.008795   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:42.008865   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:42.045814   66232 cri.go:89] found id: ""
	I0314 01:00:42.045839   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.045846   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:42.045852   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:42.045903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:42.085519   66232 cri.go:89] found id: ""
	I0314 01:00:42.085550   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.085563   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:42.085571   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:42.085636   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:42.127334   66232 cri.go:89] found id: ""
	I0314 01:00:42.127359   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.127368   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:42.127374   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:42.127425   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:42.168890   66232 cri.go:89] found id: ""
	I0314 01:00:42.168915   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.168923   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:42.168929   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:42.168990   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:42.209915   66232 cri.go:89] found id: ""
	I0314 01:00:42.209937   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.209945   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:42.209950   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:42.210005   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:42.250858   66232 cri.go:89] found id: ""
	I0314 01:00:42.250880   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.250888   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:42.250897   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:42.250952   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:42.288731   66232 cri.go:89] found id: ""
	I0314 01:00:42.288779   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.288791   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:42.288799   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:42.288854   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:42.329002   66232 cri.go:89] found id: ""
	I0314 01:00:42.329030   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.329041   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:42.329052   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:42.329066   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:42.371408   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:42.371435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:42.429017   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:42.429053   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:42.446217   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:42.446255   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:42.525765   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:42.525786   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:42.525798   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:39.338188   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:41.836306   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:40.694284   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:43.193538   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:40.540167   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:43.039511   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.122600   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:45.137115   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:45.137172   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:45.177658   66232 cri.go:89] found id: ""
	I0314 01:00:45.177685   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.177693   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:45.177698   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:45.177758   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:45.218191   66232 cri.go:89] found id: ""
	I0314 01:00:45.218220   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.218228   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:45.218234   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:45.218291   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:45.263650   66232 cri.go:89] found id: ""
	I0314 01:00:45.263673   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.263682   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:45.263688   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:45.263741   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:45.299533   66232 cri.go:89] found id: ""
	I0314 01:00:45.299562   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.299573   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:45.299579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:45.299626   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:45.338985   66232 cri.go:89] found id: ""
	I0314 01:00:45.339011   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.339021   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:45.339028   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:45.339089   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:45.380178   66232 cri.go:89] found id: ""
	I0314 01:00:45.380202   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.380210   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:45.380216   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:45.380272   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:45.420424   66232 cri.go:89] found id: ""
	I0314 01:00:45.420458   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.420470   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:45.420478   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:45.420540   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:45.460829   66232 cri.go:89] found id: ""
	I0314 01:00:45.460852   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.460860   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:45.460870   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:45.460886   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:45.516541   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:45.516578   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:45.532856   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:45.532880   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:45.611749   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:45.611772   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:45.611786   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:45.693268   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:45.693297   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:43.836776   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:46.336671   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.692531   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:47.692748   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.539526   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:47.542274   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.037560   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:48.240420   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:48.254985   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:48.255045   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:48.294167   66232 cri.go:89] found id: ""
	I0314 01:00:48.294190   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.294198   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:48.294204   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:48.294265   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:48.331189   66232 cri.go:89] found id: ""
	I0314 01:00:48.331214   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.331223   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:48.331231   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:48.331291   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:48.367601   66232 cri.go:89] found id: ""
	I0314 01:00:48.367641   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.367652   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:48.367660   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:48.367723   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:48.405032   66232 cri.go:89] found id: ""
	I0314 01:00:48.405061   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.405072   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:48.405080   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:48.405148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:48.444641   66232 cri.go:89] found id: ""
	I0314 01:00:48.444664   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.444672   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:48.444678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:48.444737   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:48.481624   66232 cri.go:89] found id: ""
	I0314 01:00:48.481653   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.481661   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:48.481667   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:48.481718   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:48.518944   66232 cri.go:89] found id: ""
	I0314 01:00:48.518976   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.518984   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:48.518989   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:48.519047   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:48.558455   66232 cri.go:89] found id: ""
	I0314 01:00:48.558495   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.558506   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:48.558518   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:48.558533   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:48.604953   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:48.604983   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:48.655766   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:48.655799   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:48.670370   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:48.670395   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:48.750567   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:48.750588   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:48.750601   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:51.342004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:51.356115   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:51.356180   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:51.393740   66232 cri.go:89] found id: ""
	I0314 01:00:51.393766   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.393773   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:51.393778   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:51.393824   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:51.432939   66232 cri.go:89] found id: ""
	I0314 01:00:51.432969   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.432980   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:51.432998   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:51.433066   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:51.469309   66232 cri.go:89] found id: ""
	I0314 01:00:51.469332   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.469340   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:51.469345   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:51.469395   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:51.506576   66232 cri.go:89] found id: ""
	I0314 01:00:51.506606   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.506618   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:51.506626   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:51.506687   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:51.547323   66232 cri.go:89] found id: ""
	I0314 01:00:51.547348   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.547358   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:51.547365   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:51.547422   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:51.588257   66232 cri.go:89] found id: ""
	I0314 01:00:51.588281   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.588289   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:51.588295   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:51.588353   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:51.629026   66232 cri.go:89] found id: ""
	I0314 01:00:51.629049   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.629057   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:51.629064   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:51.629116   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:51.668857   66232 cri.go:89] found id: ""
	I0314 01:00:51.668890   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.668903   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:51.668914   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:51.668930   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:51.724282   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:51.724329   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:51.739513   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:51.739543   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:51.815089   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:51.815116   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:51.815132   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:51.898576   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:51.898613   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:48.836517   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.837605   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:53.334491   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.192748   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:52.694281   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:52.038194   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:54.538685   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:54.441122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:54.456300   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:54.456358   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:54.492731   66232 cri.go:89] found id: ""
	I0314 01:00:54.492764   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.492776   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:54.492784   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:54.492847   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:54.530965   66232 cri.go:89] found id: ""
	I0314 01:00:54.530994   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.531005   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:54.531013   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:54.531075   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:54.570440   66232 cri.go:89] found id: ""
	I0314 01:00:54.570470   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.570487   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:54.570495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:54.570557   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:54.611569   66232 cri.go:89] found id: ""
	I0314 01:00:54.611592   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.611599   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:54.611606   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:54.611660   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:54.648383   66232 cri.go:89] found id: ""
	I0314 01:00:54.648412   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.648421   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:54.648427   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:54.648476   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:54.686598   66232 cri.go:89] found id: ""
	I0314 01:00:54.686621   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.686636   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:54.686644   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:54.686701   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:54.726413   66232 cri.go:89] found id: ""
	I0314 01:00:54.726436   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.726444   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:54.726450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:54.726496   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:54.764126   66232 cri.go:89] found id: ""
	I0314 01:00:54.764167   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.764177   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:54.764187   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:54.764201   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:54.841584   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:54.841612   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:54.841628   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:54.929736   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:54.929770   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:54.972612   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:54.972638   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:55.038415   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:55.038443   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:57.553419   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:57.567807   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:57.567865   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:57.608042   66232 cri.go:89] found id: ""
	I0314 01:00:57.608069   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.608077   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:57.608082   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:57.608138   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:57.647991   66232 cri.go:89] found id: ""
	I0314 01:00:57.648022   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.648031   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:57.648036   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:57.648096   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:57.687506   66232 cri.go:89] found id: ""
	I0314 01:00:57.687529   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.687537   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:57.687544   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:57.687603   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:57.726178   66232 cri.go:89] found id: ""
	I0314 01:00:57.726214   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.726224   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:57.726233   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:57.726297   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:57.763847   66232 cri.go:89] found id: ""
	I0314 01:00:57.763874   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.763881   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:57.763887   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:57.763946   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:57.800962   66232 cri.go:89] found id: ""
	I0314 01:00:57.800990   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.801001   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:57.801010   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:57.801063   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:57.838942   66232 cri.go:89] found id: ""
	I0314 01:00:57.838963   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.838970   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:57.838975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:57.839021   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:57.875376   66232 cri.go:89] found id: ""
	I0314 01:00:57.875405   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.875415   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:57.875424   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:57.875435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:57.917732   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:57.917755   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:57.971528   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:57.971561   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:57.986854   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:57.986879   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:58.066955   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:58.066975   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:58.066985   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:55.337356   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.836856   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:55.191933   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.193287   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:59.197833   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.039559   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:59.537165   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:00.655786   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:00.672026   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:00.672105   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:00.711128   66232 cri.go:89] found id: ""
	I0314 01:01:00.711157   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.711167   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:00.711174   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:00.711236   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:00.748236   66232 cri.go:89] found id: ""
	I0314 01:01:00.748264   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.748276   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:00.748284   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:00.748347   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:00.787436   66232 cri.go:89] found id: ""
	I0314 01:01:00.787470   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.787478   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:00.787486   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:00.787536   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:00.828583   66232 cri.go:89] found id: ""
	I0314 01:01:00.828605   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.828615   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:00.828623   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:00.828683   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:00.866856   66232 cri.go:89] found id: ""
	I0314 01:01:00.866885   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.866896   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:00.866903   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:00.866964   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:00.904860   66232 cri.go:89] found id: ""
	I0314 01:01:00.904883   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.904890   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:00.904895   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:00.904943   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:00.942199   66232 cri.go:89] found id: ""
	I0314 01:01:00.942232   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.942243   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:00.942253   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:00.942322   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:01.003925   66232 cri.go:89] found id: ""
	I0314 01:01:01.003951   66232 logs.go:276] 0 containers: []
	W0314 01:01:01.003961   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:01.003972   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:01.003987   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:01.057875   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:01.057903   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:01.074102   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:01.074128   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:01.147570   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:01.147602   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:01.147617   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:01.229816   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:01.229846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:00.337903   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:02.836288   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:01.693336   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:04.193878   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:01.539596   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:04.037927   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:03.775990   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:03.789826   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:03.789893   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:03.832595   66232 cri.go:89] found id: ""
	I0314 01:01:03.832620   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.832631   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:03.832639   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:03.832701   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:03.870895   66232 cri.go:89] found id: ""
	I0314 01:01:03.870914   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.870922   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:03.870928   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:03.870975   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:03.909337   66232 cri.go:89] found id: ""
	I0314 01:01:03.909368   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.909379   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:03.909387   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:03.909447   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:03.952071   66232 cri.go:89] found id: ""
	I0314 01:01:03.952100   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.952110   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:03.952119   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:03.952182   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:03.989374   66232 cri.go:89] found id: ""
	I0314 01:01:03.989403   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.989413   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:03.989421   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:03.989470   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:04.027654   66232 cri.go:89] found id: ""
	I0314 01:01:04.027683   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.027693   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:04.027702   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:04.027770   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:04.064870   66232 cri.go:89] found id: ""
	I0314 01:01:04.064904   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.064915   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:04.064923   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:04.064978   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:04.103214   66232 cri.go:89] found id: ""
	I0314 01:01:04.103246   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.103257   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:04.103268   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:04.103282   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:04.154061   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:04.154098   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:04.168955   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:04.168981   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:04.245214   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:04.245239   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:04.245254   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:04.321782   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:04.321822   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:06.864312   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:06.879181   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:06.879259   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:06.919707   66232 cri.go:89] found id: ""
	I0314 01:01:06.919731   66232 logs.go:276] 0 containers: []
	W0314 01:01:06.919742   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:06.919749   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:06.919809   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:06.964118   66232 cri.go:89] found id: ""
	I0314 01:01:06.964154   66232 logs.go:276] 0 containers: []
	W0314 01:01:06.964165   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:06.964173   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:06.964222   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:07.005923   66232 cri.go:89] found id: ""
	I0314 01:01:07.005948   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.005955   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:07.005961   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:07.006014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:07.048297   66232 cri.go:89] found id: ""
	I0314 01:01:07.048329   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.048336   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:07.048342   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:07.048400   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:07.089009   66232 cri.go:89] found id: ""
	I0314 01:01:07.089036   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.089044   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:07.089049   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:07.089108   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:07.125228   66232 cri.go:89] found id: ""
	I0314 01:01:07.125251   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.125259   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:07.125269   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:07.125329   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:07.163710   66232 cri.go:89] found id: ""
	I0314 01:01:07.163736   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.163743   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:07.163751   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:07.163797   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:07.202886   66232 cri.go:89] found id: ""
	I0314 01:01:07.202909   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.202916   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:07.202924   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:07.202936   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:07.249071   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:07.249098   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:07.304923   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:07.304958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:07.319983   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:07.320011   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:07.398592   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:07.398627   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:07.398640   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:05.337479   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:07.836304   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:06.692373   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.192747   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:06.539182   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.038291   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.987439   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:10.002348   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:10.002424   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:10.039153   66232 cri.go:89] found id: ""
	I0314 01:01:10.039173   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.039179   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:10.039185   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:10.039236   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:10.073527   66232 cri.go:89] found id: ""
	I0314 01:01:10.073557   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.073568   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:10.073575   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:10.073650   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:10.112192   66232 cri.go:89] found id: ""
	I0314 01:01:10.112213   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.112223   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:10.112230   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:10.112288   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:10.152821   66232 cri.go:89] found id: ""
	I0314 01:01:10.152848   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.152857   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:10.152862   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:10.152919   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:10.189327   66232 cri.go:89] found id: ""
	I0314 01:01:10.189352   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.189364   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:10.189371   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:10.189427   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:10.233885   66232 cri.go:89] found id: ""
	I0314 01:01:10.233909   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.233917   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:10.233923   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:10.233975   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:10.272033   66232 cri.go:89] found id: ""
	I0314 01:01:10.272061   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.272069   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:10.272075   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:10.272129   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:10.312680   66232 cri.go:89] found id: ""
	I0314 01:01:10.312706   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.312717   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
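	Each sweep like the one above is minikube asking CRI-O, component by component, whether a container exists for that name; every query comes back empty because the control plane never started. A rough equivalent of the sweep, using only the crictl invocation already shown in the log, would be:

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== $name =="
	      sudo crictl ps -a --quiet --name="$name"   # empty output = no container found
	    done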
	I0314 01:01:10.312727   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:10.312742   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:10.327507   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:10.327537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:10.410274   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:10.410299   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:10.410311   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:10.498686   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:10.498721   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:10.543509   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:10.543561   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:13.098621   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:10.335968   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:12.836293   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:11.692899   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.696150   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:11.538154   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.540093   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
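	The interleaved pod_ready.go lines belong to the other test clusters running in parallel (PIDs 65864, 65557 and 66021), each polling its metrics-server pod until the Ready condition turns True. A hedged example of checking that condition by hand, using one of the pod names from the log (the jsonpath expression is illustrative, not part of the test):

	    kubectl -n kube-system get pod metrics-server-57f55c9bc5-7pzll \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'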
	I0314 01:01:13.114598   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:13.114685   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:13.169907   66232 cri.go:89] found id: ""
	I0314 01:01:13.169930   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.169937   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:13.169943   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:13.169999   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:13.237394   66232 cri.go:89] found id: ""
	I0314 01:01:13.237417   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.237429   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:13.237439   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:13.237502   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:13.295227   66232 cri.go:89] found id: ""
	I0314 01:01:13.295250   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.295258   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:13.295265   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:13.295326   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:13.333351   66232 cri.go:89] found id: ""
	I0314 01:01:13.333378   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.333388   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:13.333396   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:13.333457   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:13.376480   66232 cri.go:89] found id: ""
	I0314 01:01:13.376503   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.376511   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:13.376516   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:13.376578   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:13.416746   66232 cri.go:89] found id: ""
	I0314 01:01:13.416778   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.416786   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:13.416792   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:13.416842   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:13.455971   66232 cri.go:89] found id: ""
	I0314 01:01:13.456004   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.456014   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:13.456022   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:13.456090   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:13.493921   66232 cri.go:89] found id: ""
	I0314 01:01:13.493952   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.493964   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:13.493975   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:13.493994   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:13.582269   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:13.582317   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:13.627643   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:13.627675   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:13.680989   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:13.681021   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:13.696675   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:13.696708   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:13.768850   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:16.269385   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
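	Before each retry, minikube first checks with pgrep whether a kube-apiserver process tied to this profile is running at all; in this run it never is, so the crictl sweep and log gathering repeat. The command as it appears in the log, with the flag meanings spelled out:

	    # -f match against the full command line, -x require the whole line to match the pattern,
	    # -n report only the newest matching PID; exit status 1 means no such process
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'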
	I0314 01:01:16.284543   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:16.284607   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:16.322317   66232 cri.go:89] found id: ""
	I0314 01:01:16.322345   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.322356   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:16.322364   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:16.322412   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:16.362651   66232 cri.go:89] found id: ""
	I0314 01:01:16.362686   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.362697   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:16.362705   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:16.362782   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:16.403239   66232 cri.go:89] found id: ""
	I0314 01:01:16.403268   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.403276   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:16.403282   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:16.403339   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:16.442326   66232 cri.go:89] found id: ""
	I0314 01:01:16.442348   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.442355   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:16.442361   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:16.442423   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:16.480694   66232 cri.go:89] found id: ""
	I0314 01:01:16.480722   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.480733   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:16.480741   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:16.480809   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:16.521555   66232 cri.go:89] found id: ""
	I0314 01:01:16.521585   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.521596   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:16.521603   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:16.521663   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:16.564517   66232 cri.go:89] found id: ""
	I0314 01:01:16.564544   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.564555   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:16.564561   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:16.564641   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:16.602650   66232 cri.go:89] found id: ""
	I0314 01:01:16.602680   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.602690   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:16.602701   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:16.602715   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:16.645742   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:16.645777   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:16.704940   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:16.704972   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:16.720393   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:16.720420   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:16.799609   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:16.799640   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:16.799655   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:14.836773   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:17.336818   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:16.192938   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:18.193968   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:16.038263   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:18.538739   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:19.388482   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:19.402293   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:19.402372   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:19.439978   66232 cri.go:89] found id: ""
	I0314 01:01:19.440002   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.440025   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:19.440033   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:19.440112   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:19.475984   66232 cri.go:89] found id: ""
	I0314 01:01:19.476011   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.476019   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:19.476026   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:19.476078   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:19.512705   66232 cri.go:89] found id: ""
	I0314 01:01:19.512733   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.512742   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:19.512748   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:19.512793   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:19.552300   66232 cri.go:89] found id: ""
	I0314 01:01:19.552329   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.552339   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:19.552347   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:19.552413   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:19.598630   66232 cri.go:89] found id: ""
	I0314 01:01:19.598660   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.598670   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:19.598678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:19.598741   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:19.635883   66232 cri.go:89] found id: ""
	I0314 01:01:19.635912   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.635924   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:19.635931   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:19.635991   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:19.670339   66232 cri.go:89] found id: ""
	I0314 01:01:19.670364   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.670371   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:19.670377   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:19.670430   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:19.709469   66232 cri.go:89] found id: ""
	I0314 01:01:19.709512   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.709522   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:19.709533   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:19.709551   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:19.782157   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:19.782181   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:19.782192   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:19.866496   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:19.866531   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:19.910167   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:19.910198   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:19.963516   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:19.963546   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
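	The per-iteration log bundle is collected with journalctl and dmesg over SSH; the same commands, copied verbatim from the log above, can be run by hand on the node to see what the test saw:

	    sudo journalctl -u kubelet -n 400      # last 400 kubelet entries
	    sudo journalctl -u crio -n 400         # last 400 CRI-O entries
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400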
	I0314 01:01:22.478995   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:22.493273   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:22.493351   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:22.531559   66232 cri.go:89] found id: ""
	I0314 01:01:22.531581   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.531588   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:22.531594   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:22.531651   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:22.569478   66232 cri.go:89] found id: ""
	I0314 01:01:22.569508   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.569516   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:22.569524   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:22.569570   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:22.607573   66232 cri.go:89] found id: ""
	I0314 01:01:22.607599   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.607615   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:22.607625   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:22.607700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:22.644849   66232 cri.go:89] found id: ""
	I0314 01:01:22.644875   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.644885   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:22.644893   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:22.644950   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:22.683745   66232 cri.go:89] found id: ""
	I0314 01:01:22.683771   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.683779   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:22.683785   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:22.683845   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:22.723426   66232 cri.go:89] found id: ""
	I0314 01:01:22.723455   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.723462   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:22.723468   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:22.723512   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:22.761814   66232 cri.go:89] found id: ""
	I0314 01:01:22.761850   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.761860   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:22.761867   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:22.761918   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:22.799649   66232 cri.go:89] found id: ""
	I0314 01:01:22.799677   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.799687   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:22.799697   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:22.799707   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:22.840183   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:22.840215   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:22.893385   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:22.893416   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:22.909225   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:22.909250   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:22.982333   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:22.982353   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:22.982364   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:19.835211   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:21.835716   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:20.194985   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:22.692889   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:21.040809   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:23.538236   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:25.560639   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:25.575003   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:25.575082   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:25.613540   66232 cri.go:89] found id: ""
	I0314 01:01:25.613571   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.613583   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:25.613591   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:25.613653   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:25.652340   66232 cri.go:89] found id: ""
	I0314 01:01:25.652365   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.652373   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:25.652379   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:25.652425   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:25.691035   66232 cri.go:89] found id: ""
	I0314 01:01:25.691070   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.691079   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:25.691087   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:25.691152   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:25.729666   66232 cri.go:89] found id: ""
	I0314 01:01:25.729695   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.729705   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:25.729713   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:25.729783   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:25.766836   66232 cri.go:89] found id: ""
	I0314 01:01:25.766863   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.766871   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:25.766877   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:25.766934   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:25.813690   66232 cri.go:89] found id: ""
	I0314 01:01:25.813715   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.813727   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:25.813734   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:25.813796   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:25.858630   66232 cri.go:89] found id: ""
	I0314 01:01:25.858668   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.858679   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:25.858688   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:25.858774   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:25.896340   66232 cri.go:89] found id: ""
	I0314 01:01:25.896365   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.896372   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:25.896380   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:25.896392   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:25.949480   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:25.949513   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:25.965185   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:25.965211   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:26.041208   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:26.041228   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:26.041243   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:26.123892   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:26.123928   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:23.839306   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:26.335177   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.337014   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:24.695636   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:27.193395   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:29.200714   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:26.037924   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.038831   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.666449   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:28.679889   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:28.679948   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:28.717183   66232 cri.go:89] found id: ""
	I0314 01:01:28.717207   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.717214   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:28.717220   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:28.717275   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:28.761049   66232 cri.go:89] found id: ""
	I0314 01:01:28.761070   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.761077   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:28.761083   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:28.761133   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:28.800429   66232 cri.go:89] found id: ""
	I0314 01:01:28.800454   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.800462   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:28.800468   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:28.800523   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:28.841757   66232 cri.go:89] found id: ""
	I0314 01:01:28.841780   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.841788   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:28.841793   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:28.841838   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:28.883658   66232 cri.go:89] found id: ""
	I0314 01:01:28.883686   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.883696   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:28.883703   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:28.883759   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:28.918811   66232 cri.go:89] found id: ""
	I0314 01:01:28.918840   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.918851   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:28.918858   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:28.918916   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:28.955088   66232 cri.go:89] found id: ""
	I0314 01:01:28.955119   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.955130   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:28.955138   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:28.955195   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:28.992865   66232 cri.go:89] found id: ""
	I0314 01:01:28.992891   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.992903   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:28.992913   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:28.992931   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:29.080095   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:29.080132   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:29.127764   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:29.127789   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:29.182075   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:29.182109   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:29.198865   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:29.198891   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:29.277413   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:31.777693   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:31.792353   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:31.792426   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:31.830873   66232 cri.go:89] found id: ""
	I0314 01:01:31.830897   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.830904   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:31.830910   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:31.830955   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:31.868648   66232 cri.go:89] found id: ""
	I0314 01:01:31.868670   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.868677   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:31.868683   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:31.868733   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:31.910124   66232 cri.go:89] found id: ""
	I0314 01:01:31.910146   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.910155   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:31.910160   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:31.910209   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:31.957558   66232 cri.go:89] found id: ""
	I0314 01:01:31.957584   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.957592   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:31.957598   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:31.957652   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:32.000112   66232 cri.go:89] found id: ""
	I0314 01:01:32.000139   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.000157   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:32.000165   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:32.000229   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:32.037838   66232 cri.go:89] found id: ""
	I0314 01:01:32.037865   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.037876   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:32.037888   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:32.037949   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:32.076069   66232 cri.go:89] found id: ""
	I0314 01:01:32.076093   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.076101   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:32.076107   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:32.076172   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:32.114702   66232 cri.go:89] found id: ""
	I0314 01:01:32.114730   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.114737   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:32.114745   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:32.114757   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:32.162043   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:32.162078   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:32.219038   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:32.219075   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:32.234331   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:32.234358   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:32.307667   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:32.307688   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:32.307700   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:30.835936   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:33.335575   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:31.692739   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:33.693455   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:30.537265   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:32.538754   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:35.037382   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:34.893945   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:34.907888   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:34.907966   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:34.944887   66232 cri.go:89] found id: ""
	I0314 01:01:34.944911   66232 logs.go:276] 0 containers: []
	W0314 01:01:34.944919   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:34.944925   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:34.944973   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:34.992937   66232 cri.go:89] found id: ""
	I0314 01:01:34.992964   66232 logs.go:276] 0 containers: []
	W0314 01:01:34.992974   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:34.992982   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:34.993040   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:35.030147   66232 cri.go:89] found id: ""
	I0314 01:01:35.030171   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.030178   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:35.030184   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:35.030230   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:35.065966   66232 cri.go:89] found id: ""
	I0314 01:01:35.065999   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.066010   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:35.066018   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:35.066077   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:35.104221   66232 cri.go:89] found id: ""
	I0314 01:01:35.104251   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.104262   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:35.104270   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:35.104347   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:35.145221   66232 cri.go:89] found id: ""
	I0314 01:01:35.145245   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.145253   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:35.145258   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:35.145313   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:35.185119   66232 cri.go:89] found id: ""
	I0314 01:01:35.185152   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.185162   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:35.185168   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:35.185228   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:35.228309   66232 cri.go:89] found id: ""
	I0314 01:01:35.228341   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.228352   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:35.228363   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:35.228381   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:35.242185   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:35.242213   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:35.318542   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:35.318564   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:35.318578   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:35.396003   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:35.396042   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:35.437435   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:35.437464   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:37.992023   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:38.007180   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:38.007260   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:38.047871   66232 cri.go:89] found id: ""
	I0314 01:01:38.047906   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.047917   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:38.047925   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:38.047982   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:38.085359   66232 cri.go:89] found id: ""
	I0314 01:01:38.085388   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.085397   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:38.085404   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:38.085462   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:35.336258   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:37.835151   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:35.696328   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:38.192502   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:37.037490   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:39.038097   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:38.126190   66232 cri.go:89] found id: ""
	I0314 01:01:38.126219   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.126227   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:38.126233   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:38.126285   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:38.163163   66232 cri.go:89] found id: ""
	I0314 01:01:38.163190   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.163197   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:38.163202   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:38.163261   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:38.204338   66232 cri.go:89] found id: ""
	I0314 01:01:38.204360   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.204367   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:38.204372   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:38.204429   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:38.246252   66232 cri.go:89] found id: ""
	I0314 01:01:38.246278   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.246288   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:38.246296   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:38.246357   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:38.281173   66232 cri.go:89] found id: ""
	I0314 01:01:38.281198   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.281205   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:38.281211   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:38.281258   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:38.323744   66232 cri.go:89] found id: ""
	I0314 01:01:38.323774   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.323784   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:38.323794   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:38.323808   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:38.377987   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:38.378020   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:38.392879   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:38.392904   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:38.479475   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:38.479501   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:38.479515   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:38.563409   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:38.563440   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:41.105122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:41.119932   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:41.119997   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:41.158809   66232 cri.go:89] found id: ""
	I0314 01:01:41.158837   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.158847   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:41.158854   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:41.158915   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:41.201150   66232 cri.go:89] found id: ""
	I0314 01:01:41.201175   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.201183   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:41.201189   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:41.201239   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:41.240139   66232 cri.go:89] found id: ""
	I0314 01:01:41.240165   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.240173   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:41.240178   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:41.240232   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:41.278220   66232 cri.go:89] found id: ""
	I0314 01:01:41.278249   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.278257   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:41.278262   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:41.278310   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:41.313130   66232 cri.go:89] found id: ""
	I0314 01:01:41.313161   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.313170   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:41.313175   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:41.313235   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:41.351266   66232 cri.go:89] found id: ""
	I0314 01:01:41.351296   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.351305   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:41.351313   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:41.351378   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:41.389765   66232 cri.go:89] found id: ""
	I0314 01:01:41.389796   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.389807   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:41.389816   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:41.389893   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:41.437503   66232 cri.go:89] found id: ""
	I0314 01:01:41.437527   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.437537   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:41.437553   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:41.437568   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:41.451137   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:41.451170   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:41.554349   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:41.554376   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:41.554391   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:41.634670   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:41.634713   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:41.678576   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:41.678607   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:39.836520   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:41.837350   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:40.192708   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:42.193948   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:41.038661   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:43.538547   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:44.237699   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:44.252678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:44.252757   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:44.290393   66232 cri.go:89] found id: ""
	I0314 01:01:44.290420   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.290430   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:44.290438   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:44.290492   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:44.331394   66232 cri.go:89] found id: ""
	I0314 01:01:44.331426   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.331438   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:44.331446   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:44.331506   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:44.373654   66232 cri.go:89] found id: ""
	I0314 01:01:44.373686   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.373694   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:44.373702   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:44.373764   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:44.414168   66232 cri.go:89] found id: ""
	I0314 01:01:44.414198   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.414206   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:44.414212   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:44.414259   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:44.451158   66232 cri.go:89] found id: ""
	I0314 01:01:44.451183   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.451193   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:44.451201   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:44.451269   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:44.495410   66232 cri.go:89] found id: ""
	I0314 01:01:44.495436   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.495443   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:44.495450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:44.495509   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:44.539100   66232 cri.go:89] found id: ""
	I0314 01:01:44.539123   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.539129   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:44.539136   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:44.539189   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:44.581428   66232 cri.go:89] found id: ""
	I0314 01:01:44.581451   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.581463   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:44.581473   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:44.581491   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:44.657373   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:44.657393   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:44.657406   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:44.742163   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:44.742198   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:44.786447   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:44.786481   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:44.840479   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:44.840534   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:47.355369   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:47.369427   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:47.369491   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:47.408529   66232 cri.go:89] found id: ""
	I0314 01:01:47.408559   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.408567   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:47.408574   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:47.408619   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:47.445164   66232 cri.go:89] found id: ""
	I0314 01:01:47.445192   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.445201   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:47.445208   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:47.445255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:47.503333   66232 cri.go:89] found id: ""
	I0314 01:01:47.503367   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.503378   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:47.503385   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:47.503441   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:47.544289   66232 cri.go:89] found id: ""
	I0314 01:01:47.544313   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.544322   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:47.544329   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:47.544389   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:47.581686   66232 cri.go:89] found id: ""
	I0314 01:01:47.581707   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.581715   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:47.581726   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:47.581773   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:47.620907   66232 cri.go:89] found id: ""
	I0314 01:01:47.620937   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.620948   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:47.620954   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:47.620999   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:47.655975   66232 cri.go:89] found id: ""
	I0314 01:01:47.656006   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.656018   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:47.656026   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:47.656088   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:47.694787   66232 cri.go:89] found id: ""
	I0314 01:01:47.694813   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.694822   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:47.694832   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:47.694846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:47.732722   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:47.732752   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:47.784521   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:47.784551   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:47.798074   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:47.798096   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:47.872951   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:47.872971   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:47.872984   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:44.336278   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:46.336942   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:44.693975   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:47.194065   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:46.037997   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:48.038275   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.456896   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:50.472083   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:50.472159   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:50.510213   66232 cri.go:89] found id: ""
	I0314 01:01:50.510236   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.510244   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:50.510251   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:50.510308   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:50.551878   66232 cri.go:89] found id: ""
	I0314 01:01:50.551906   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.551915   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:50.551923   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:50.551983   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:50.599971   66232 cri.go:89] found id: ""
	I0314 01:01:50.599993   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.600000   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:50.600011   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:50.600068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:50.636105   66232 cri.go:89] found id: ""
	I0314 01:01:50.636135   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.636146   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:50.636154   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:50.636218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:50.674154   66232 cri.go:89] found id: ""
	I0314 01:01:50.674188   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.674199   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:50.674207   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:50.674273   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:50.711946   66232 cri.go:89] found id: ""
	I0314 01:01:50.711980   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.711992   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:50.711999   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:50.712048   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:50.750574   66232 cri.go:89] found id: ""
	I0314 01:01:50.750601   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.750612   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:50.750620   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:50.750679   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:50.788991   66232 cri.go:89] found id: ""
	I0314 01:01:50.789022   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.789033   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:50.789045   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:50.789060   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:50.842491   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:50.842524   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:50.857759   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:50.857785   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:50.929715   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:50.929739   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:50.929754   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:51.008843   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:51.008883   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:48.835669   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.835802   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.335897   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:49.692834   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:52.191722   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:54.192101   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.543509   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.037040   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.554369   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:53.569045   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:53.569125   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:53.607571   66232 cri.go:89] found id: ""
	I0314 01:01:53.607602   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.607613   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:53.607621   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:53.607700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:53.647998   66232 cri.go:89] found id: ""
	I0314 01:01:53.648027   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.648037   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:53.648044   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:53.648116   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:53.684825   66232 cri.go:89] found id: ""
	I0314 01:01:53.684855   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.684866   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:53.684873   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:53.684931   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:53.722438   66232 cri.go:89] found id: ""
	I0314 01:01:53.722465   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.722476   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:53.722484   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:53.722543   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:53.761945   66232 cri.go:89] found id: ""
	I0314 01:01:53.761987   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.761999   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:53.762014   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:53.762075   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:53.799307   66232 cri.go:89] found id: ""
	I0314 01:01:53.799338   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.799349   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:53.799362   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:53.799420   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:53.838685   66232 cri.go:89] found id: ""
	I0314 01:01:53.838713   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.838724   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:53.838731   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:53.838810   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:53.884324   66232 cri.go:89] found id: ""
	I0314 01:01:53.884351   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.884360   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:53.884370   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:53.884382   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:53.942495   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:53.942527   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:54.007790   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:54.007828   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:54.023348   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:54.023378   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:54.099122   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:54.099150   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:54.099165   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:56.679464   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:56.693691   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:56.693753   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:56.731721   66232 cri.go:89] found id: ""
	I0314 01:01:56.731749   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.731756   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:56.731761   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:56.731811   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:56.766579   66232 cri.go:89] found id: ""
	I0314 01:01:56.766607   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.766614   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:56.766620   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:56.766675   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:56.807537   66232 cri.go:89] found id: ""
	I0314 01:01:56.807565   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.807574   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:56.807579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:56.807631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:56.849077   66232 cri.go:89] found id: ""
	I0314 01:01:56.849100   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.849106   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:56.849112   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:56.849169   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:56.890982   66232 cri.go:89] found id: ""
	I0314 01:01:56.891003   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.891011   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:56.891016   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:56.891061   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:56.929769   66232 cri.go:89] found id: ""
	I0314 01:01:56.929790   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.929799   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:56.929805   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:56.929848   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:56.967319   66232 cri.go:89] found id: ""
	I0314 01:01:56.967346   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.967356   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:56.967363   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:56.967421   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:57.004649   66232 cri.go:89] found id: ""
	I0314 01:01:57.004670   66232 logs.go:276] 0 containers: []
	W0314 01:01:57.004677   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:57.004685   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:57.004696   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:57.018578   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:57.018604   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:57.090826   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:57.090852   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:57.090868   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:57.170367   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:57.170398   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:57.216138   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:57.216179   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:55.835724   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:57.836100   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:56.192712   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:58.193199   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:55.538829   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:58.037589   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:00.038724   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:59.769685   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:59.786652   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:59.786713   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:59.869453   66232 cri.go:89] found id: ""
	I0314 01:01:59.869480   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.869491   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:59.869499   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:59.869568   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:59.915747   66232 cri.go:89] found id: ""
	I0314 01:01:59.915769   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.915777   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:59.915782   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:59.915840   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:59.951088   66232 cri.go:89] found id: ""
	I0314 01:01:59.951117   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.951127   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:59.951133   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:59.951197   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:59.986847   66232 cri.go:89] found id: ""
	I0314 01:01:59.986877   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.986890   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:59.986898   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:59.986954   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:00.025390   66232 cri.go:89] found id: ""
	I0314 01:02:00.025420   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.025432   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:00.025440   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:00.025493   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:00.064174   66232 cri.go:89] found id: ""
	I0314 01:02:00.064206   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.064217   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:00.064226   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:00.064286   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:00.102079   66232 cri.go:89] found id: ""
	I0314 01:02:00.102102   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.102112   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:00.102119   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:00.102179   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:00.138672   66232 cri.go:89] found id: ""
	I0314 01:02:00.138700   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.138711   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:00.138721   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:00.138740   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:00.153516   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:00.153548   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:00.226585   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:00.226616   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:00.226631   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:00.307861   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:00.307898   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:00.353938   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:00.353966   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:02.909252   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:02.923483   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:02.923560   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:02.964379   66232 cri.go:89] found id: ""
	I0314 01:02:02.964408   66232 logs.go:276] 0 containers: []
	W0314 01:02:02.964419   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:02.964427   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:02.964486   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:03.001988   66232 cri.go:89] found id: ""
	I0314 01:02:03.002018   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.002028   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:03.002036   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:03.002106   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:03.043534   66232 cri.go:89] found id: ""
	I0314 01:02:03.043561   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.043572   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:03.043579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:03.043637   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:03.083413   66232 cri.go:89] found id: ""
	I0314 01:02:03.083436   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.083444   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:03.083450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:03.083504   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:59.837128   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.336485   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:00.692314   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.693186   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.039631   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:04.536890   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:03.117627   66232 cri.go:89] found id: ""
	I0314 01:02:03.117652   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.117664   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:03.117670   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:03.117718   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:03.151758   66232 cri.go:89] found id: ""
	I0314 01:02:03.151791   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.151802   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:03.151810   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:03.151861   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:03.192091   66232 cri.go:89] found id: ""
	I0314 01:02:03.192112   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.192118   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:03.192124   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:03.192178   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:03.235995   66232 cri.go:89] found id: ""
	I0314 01:02:03.236019   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.236029   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:03.236039   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:03.236053   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:03.289431   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:03.289475   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:03.305271   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:03.305325   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:03.383902   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:03.383922   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:03.383937   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:03.462882   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:03.462926   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:06.007991   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:06.023709   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:06.023768   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:06.063630   66232 cri.go:89] found id: ""
	I0314 01:02:06.063655   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.063662   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:06.063669   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:06.063727   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:06.103042   66232 cri.go:89] found id: ""
	I0314 01:02:06.103074   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.103083   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:06.103092   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:06.103149   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:06.139774   66232 cri.go:89] found id: ""
	I0314 01:02:06.139799   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.139810   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:06.139817   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:06.139874   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:06.176671   66232 cri.go:89] found id: ""
	I0314 01:02:06.176713   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.176724   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:06.176732   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:06.176798   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:06.216798   66232 cri.go:89] found id: ""
	I0314 01:02:06.216828   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.216840   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:06.216847   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:06.216903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:06.256606   66232 cri.go:89] found id: ""
	I0314 01:02:06.256635   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.256645   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:06.256653   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:06.256712   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:06.295087   66232 cri.go:89] found id: ""
	I0314 01:02:06.295119   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.295129   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:06.295137   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:06.295198   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:06.329411   66232 cri.go:89] found id: ""
	I0314 01:02:06.329441   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.329454   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:06.329464   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:06.329489   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:06.412363   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:06.412409   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:06.458902   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:06.458932   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:06.510147   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:06.510182   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:06.526670   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:06.526695   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:06.604970   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:04.835705   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:07.335832   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:04.693230   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:06.694579   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:08.697716   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:06.538380   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:08.538547   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:09.106124   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:09.119646   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:09.119709   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:09.155771   66232 cri.go:89] found id: ""
	I0314 01:02:09.155804   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.155815   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:09.155824   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:09.155883   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:09.191683   66232 cri.go:89] found id: ""
	I0314 01:02:09.191722   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.191734   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:09.191742   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:09.191808   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:09.227010   66232 cri.go:89] found id: ""
	I0314 01:02:09.227033   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.227041   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:09.227050   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:09.227118   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:09.262820   66232 cri.go:89] found id: ""
	I0314 01:02:09.262850   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.262861   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:09.262869   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:09.262925   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:09.296057   66232 cri.go:89] found id: ""
	I0314 01:02:09.296092   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.296102   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:09.296109   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:09.296171   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:09.329589   66232 cri.go:89] found id: ""
	I0314 01:02:09.329615   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.329626   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:09.329634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:09.329685   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:09.374675   66232 cri.go:89] found id: ""
	I0314 01:02:09.374702   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.374710   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:09.374718   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:09.374785   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:09.412467   66232 cri.go:89] found id: ""
	I0314 01:02:09.412497   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.412508   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:09.412518   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:09.412535   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:09.465354   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:09.465386   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:09.481823   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:09.481849   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:09.558431   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:09.558458   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:09.558475   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:09.641132   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:09.641171   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:12.190189   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:12.203783   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:12.203858   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:12.240189   66232 cri.go:89] found id: ""
	I0314 01:02:12.240219   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.240230   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:12.240238   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:12.240296   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:12.276307   66232 cri.go:89] found id: ""
	I0314 01:02:12.276336   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.276346   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:12.276354   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:12.276415   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:12.316916   66232 cri.go:89] found id: ""
	I0314 01:02:12.316949   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.316967   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:12.316975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:12.317036   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:12.356871   66232 cri.go:89] found id: ""
	I0314 01:02:12.356900   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.356910   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:12.356918   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:12.356981   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:12.391983   66232 cri.go:89] found id: ""
	I0314 01:02:12.392015   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.392026   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:12.392035   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:12.392105   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:12.428823   66232 cri.go:89] found id: ""
	I0314 01:02:12.428857   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.428868   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:12.428877   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:12.428938   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:12.466319   66232 cri.go:89] found id: ""
	I0314 01:02:12.466342   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.466349   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:12.466354   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:12.466413   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:12.502277   66232 cri.go:89] found id: ""
	I0314 01:02:12.502309   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.502321   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:12.502333   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:12.502352   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:12.582309   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:12.582340   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:12.621333   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:12.621357   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:12.678396   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:12.678432   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:12.694371   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:12.694397   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:12.767592   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:09.337016   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.339617   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.192226   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:13.195180   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.037728   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:13.037824   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.038206   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.268149   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:15.281634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:15.281707   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:15.316336   66232 cri.go:89] found id: ""
	I0314 01:02:15.316358   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.316366   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:15.316373   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:15.316437   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:15.356168   66232 cri.go:89] found id: ""
	I0314 01:02:15.356194   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.356201   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:15.356206   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:15.356257   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:15.394686   66232 cri.go:89] found id: ""
	I0314 01:02:15.394714   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.394726   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:15.394734   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:15.394813   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:15.433996   66232 cri.go:89] found id: ""
	I0314 01:02:15.434023   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.434034   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:15.434042   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:15.434103   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:15.479544   66232 cri.go:89] found id: ""
	I0314 01:02:15.479572   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.479583   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:15.479590   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:15.479659   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:15.514835   66232 cri.go:89] found id: ""
	I0314 01:02:15.514865   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.514875   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:15.514883   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:15.514942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:15.554980   66232 cri.go:89] found id: ""
	I0314 01:02:15.555011   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.555022   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:15.555030   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:15.555092   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:15.590130   66232 cri.go:89] found id: ""
	I0314 01:02:15.590167   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.590178   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:15.590188   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:15.590203   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:15.658375   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:15.658394   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:15.658407   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:15.737774   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:15.737806   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:15.780480   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:15.780512   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:15.832787   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:15.832830   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:13.834955   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.836544   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:17.836736   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.693510   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:18.193089   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:17.537729   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:19.540149   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:18.350032   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:18.364871   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:18.364931   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:18.406581   66232 cri.go:89] found id: ""
	I0314 01:02:18.406611   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.406620   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:18.406633   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:18.406696   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:18.446140   66232 cri.go:89] found id: ""
	I0314 01:02:18.446166   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.446176   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:18.446183   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:18.446242   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:18.492662   66232 cri.go:89] found id: ""
	I0314 01:02:18.492705   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.492713   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:18.492719   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:18.492777   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:18.535933   66232 cri.go:89] found id: ""
	I0314 01:02:18.535961   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.535972   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:18.535980   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:18.536056   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:18.574133   66232 cri.go:89] found id: ""
	I0314 01:02:18.574159   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.574167   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:18.574173   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:18.574227   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:18.612726   66232 cri.go:89] found id: ""
	I0314 01:02:18.612750   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.612757   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:18.612763   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:18.612815   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:18.653068   66232 cri.go:89] found id: ""
	I0314 01:02:18.653092   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.653099   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:18.653105   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:18.653148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:18.692840   66232 cri.go:89] found id: ""
	I0314 01:02:18.692880   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.692890   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:18.692902   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:18.692915   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:18.748680   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:18.748717   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:18.764026   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:18.764054   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:18.841767   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:18.841791   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:18.841805   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:18.923479   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:18.923512   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:21.467679   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:21.482326   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:21.482400   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:21.519603   66232 cri.go:89] found id: ""
	I0314 01:02:21.519627   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.519635   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:21.519641   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:21.519711   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:21.562301   66232 cri.go:89] found id: ""
	I0314 01:02:21.562325   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.562333   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:21.562338   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:21.562395   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:21.599503   66232 cri.go:89] found id: ""
	I0314 01:02:21.599531   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.599539   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:21.599545   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:21.599598   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:21.635347   66232 cri.go:89] found id: ""
	I0314 01:02:21.635378   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.635390   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:21.635397   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:21.635458   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:21.672622   66232 cri.go:89] found id: ""
	I0314 01:02:21.672648   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.672658   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:21.672667   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:21.672719   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:21.713177   66232 cri.go:89] found id: ""
	I0314 01:02:21.713201   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.713209   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:21.713217   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:21.713277   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:21.754273   66232 cri.go:89] found id: ""
	I0314 01:02:21.754312   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.754336   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:21.754350   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:21.754408   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:21.793782   66232 cri.go:89] found id: ""
	I0314 01:02:21.793832   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.793852   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:21.793864   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:21.793886   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:21.877495   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:21.877521   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:21.877536   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:21.963446   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:21.963485   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:22.005250   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:22.005286   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:22.081328   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:22.081368   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:20.336150   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:21.836598   65864 pod_ready.go:81] duration metric: took 4m0.008051794s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	E0314 01:02:21.836623   65864 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:02:21.836633   65864 pod_ready.go:38] duration metric: took 4m4.551998385s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:02:21.836650   65864 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:02:21.836684   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:21.836737   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:21.913367   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:21.913392   65864 cri.go:89] found id: ""
	I0314 01:02:21.913401   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:21.913461   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:21.920425   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:21.920491   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:21.968527   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:21.968560   65864 cri.go:89] found id: ""
	I0314 01:02:21.968578   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:21.968641   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:21.973938   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:21.974019   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:22.027214   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:22.027239   65864 cri.go:89] found id: ""
	I0314 01:02:22.027250   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:22.027301   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.033919   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:22.034007   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:22.085453   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:22.085477   65864 cri.go:89] found id: ""
	I0314 01:02:22.085486   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:22.085541   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.091651   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:22.091726   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:22.134083   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:22.134112   65864 cri.go:89] found id: ""
	I0314 01:02:22.134121   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:22.134179   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.139013   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:22.139089   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:22.176760   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:22.176785   65864 cri.go:89] found id: ""
	I0314 01:02:22.176795   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:22.176844   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.182497   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:22.182573   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:22.236966   65864 cri.go:89] found id: ""
	I0314 01:02:22.237000   65864 logs.go:276] 0 containers: []
	W0314 01:02:22.237010   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:22.237017   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:22.237078   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:22.289422   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:22.289448   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:22.289454   65864 cri.go:89] found id: ""
	I0314 01:02:22.289462   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:22.289526   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.295489   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.300166   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:22.300189   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:22.361740   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:22.361779   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:22.432402   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:22.432443   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:22.476348   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:22.476378   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:22.516881   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:22.516911   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:22.576864   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:22.576899   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:22.622739   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:22.622783   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:22.679757   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:22.679794   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:22.882084   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:22.882126   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:22.937962   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:22.937999   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:22.994180   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:22.994209   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:23.038730   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:23.038761   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:23.518422   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:23.518471   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:20.193555   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:22.194625   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:22.039562   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:24.043053   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:24.599757   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:24.615216   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:24.615273   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:24.654495   66232 cri.go:89] found id: ""
	I0314 01:02:24.654521   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.654529   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:24.654535   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:24.654581   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:24.691822   66232 cri.go:89] found id: ""
	I0314 01:02:24.691854   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.691864   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:24.691872   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:24.691927   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:24.734755   66232 cri.go:89] found id: ""
	I0314 01:02:24.734796   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.734806   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:24.734812   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:24.734864   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:24.770474   66232 cri.go:89] found id: ""
	I0314 01:02:24.770502   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.770513   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:24.770520   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:24.770564   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:24.807518   66232 cri.go:89] found id: ""
	I0314 01:02:24.807549   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.807562   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:24.807570   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:24.807636   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:24.844469   66232 cri.go:89] found id: ""
	I0314 01:02:24.844500   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.844513   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:24.844521   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:24.844585   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:24.882099   66232 cri.go:89] found id: ""
	I0314 01:02:24.882136   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.882147   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:24.882155   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:24.882215   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:24.922711   66232 cri.go:89] found id: ""
	I0314 01:02:24.922751   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.922773   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:24.922787   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:24.922802   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:24.965349   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:24.965374   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:25.021552   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:25.021585   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:25.039990   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:25.040027   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:25.116945   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:25.116967   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:25.116981   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:27.706427   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:27.722129   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:27.722193   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:27.762976   66232 cri.go:89] found id: ""
	I0314 01:02:27.763015   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.763023   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:27.763029   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:27.763077   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:27.803939   66232 cri.go:89] found id: ""
	I0314 01:02:27.803979   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.803990   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:27.803997   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:27.804068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:27.844923   66232 cri.go:89] found id: ""
	I0314 01:02:27.844946   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.844953   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:27.844959   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:27.845015   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:27.882694   66232 cri.go:89] found id: ""
	I0314 01:02:27.882717   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.882725   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:27.882731   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:27.882801   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:27.922926   66232 cri.go:89] found id: ""
	I0314 01:02:27.922958   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.922968   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:27.922975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:27.923035   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:27.960120   66232 cri.go:89] found id: ""
	I0314 01:02:27.960149   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.960160   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:27.960168   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:27.960228   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:28.015021   66232 cri.go:89] found id: ""
	I0314 01:02:28.015047   66232 logs.go:276] 0 containers: []
	W0314 01:02:28.015056   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:28.015062   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:28.015119   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:28.054923   66232 cri.go:89] found id: ""
	I0314 01:02:28.054946   66232 logs.go:276] 0 containers: []
	W0314 01:02:28.054952   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:28.054960   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:28.054972   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:26.038373   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:26.055483   65864 api_server.go:72] duration metric: took 4m14.013216316s to wait for apiserver process to appear ...
	I0314 01:02:26.055505   65864 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:02:26.055536   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:26.055585   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:26.108344   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:26.108363   65864 cri.go:89] found id: ""
	I0314 01:02:26.108370   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:26.108420   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.112806   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:26.112872   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:26.155399   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:26.155417   65864 cri.go:89] found id: ""
	I0314 01:02:26.155424   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:26.155468   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.159725   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:26.159780   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:26.201938   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:26.201960   65864 cri.go:89] found id: ""
	I0314 01:02:26.201968   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:26.202012   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.206751   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:26.206831   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:26.252327   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:26.252350   65864 cri.go:89] found id: ""
	I0314 01:02:26.252357   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:26.252405   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.257325   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:26.257387   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:26.297880   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:26.297901   65864 cri.go:89] found id: ""
	I0314 01:02:26.297910   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:26.297965   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.302607   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:26.302679   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:26.343104   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:26.343131   65864 cri.go:89] found id: ""
	I0314 01:02:26.343141   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:26.343207   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.347594   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:26.347652   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:26.390465   65864 cri.go:89] found id: ""
	I0314 01:02:26.390495   65864 logs.go:276] 0 containers: []
	W0314 01:02:26.390505   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:26.390517   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:26.390576   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:26.434540   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:26.434566   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:26.434572   65864 cri.go:89] found id: ""
	I0314 01:02:26.434582   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:26.434644   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.439794   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.445012   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:26.445036   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:26.488302   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:26.488331   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:26.526601   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:26.526630   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:26.578955   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:26.578989   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:26.633535   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:26.633573   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:26.764496   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:26.764533   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:26.822677   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:26.822713   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:26.866628   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:26.866653   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:26.909498   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:26.909524   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:26.965612   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:26.965646   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:27.004922   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:27.004974   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:27.422800   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:27.422844   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:27.441082   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:27.441113   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:24.693782   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:27.193414   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:26.537535   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:28.539922   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:28.111690   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:28.111723   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:28.126158   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:28.126189   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:28.200521   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:28.200542   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:28.200554   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:28.279637   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:28.279672   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:30.824286   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:30.840707   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.840787   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.888628   66232 cri.go:89] found id: ""
	I0314 01:02:30.888658   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.888669   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:30.888677   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.888758   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.934219   66232 cri.go:89] found id: ""
	I0314 01:02:30.934254   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.934264   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:30.934272   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.934332   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.979679   66232 cri.go:89] found id: ""
	I0314 01:02:30.979702   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.979713   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:30.979721   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.979792   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:31.024045   66232 cri.go:89] found id: ""
	I0314 01:02:31.024074   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.024085   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:31.024093   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:31.024150   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:31.070153   66232 cri.go:89] found id: ""
	I0314 01:02:31.070185   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.070197   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:31.070204   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:31.070267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:31.121943   66232 cri.go:89] found id: ""
	I0314 01:02:31.121972   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.121983   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:31.121992   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:31.122056   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:31.168934   66232 cri.go:89] found id: ""
	I0314 01:02:31.168951   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.168959   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:31.168965   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:31.169040   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:31.213885   66232 cri.go:89] found id: ""
	I0314 01:02:31.213917   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.213929   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:31.213939   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:31.213958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:31.304097   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:31.304127   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:31.304142   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:31.388525   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:31.388566   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:31.442920   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:31.442953   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:31.505932   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:31.505965   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:29.995508   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 01:02:30.001049   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0314 01:02:30.002172   65864 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 01:02:30.002194   65864 api_server.go:131] duration metric: took 3.946684299s to wait for apiserver health ...
	I0314 01:02:30.002201   65864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:02:30.002224   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.002268   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.043814   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:30.043836   65864 cri.go:89] found id: ""
	I0314 01:02:30.043850   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:30.043904   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.048215   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.048287   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.085507   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:30.085530   65864 cri.go:89] found id: ""
	I0314 01:02:30.085538   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:30.085587   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.089899   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.089958   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.129518   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:30.129538   65864 cri.go:89] found id: ""
	I0314 01:02:30.129545   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:30.129588   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.134037   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.134121   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:30.178092   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:30.178114   65864 cri.go:89] found id: ""
	I0314 01:02:30.178122   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:30.178174   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.184655   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:30.184712   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:30.223945   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:30.223969   65864 cri.go:89] found id: ""
	I0314 01:02:30.223987   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:30.224051   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.228354   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:30.228410   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:30.265712   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:30.265741   65864 cri.go:89] found id: ""
	I0314 01:02:30.265758   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:30.265814   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.270260   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:30.270312   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:30.320283   65864 cri.go:89] found id: ""
	I0314 01:02:30.320314   65864 logs.go:276] 0 containers: []
	W0314 01:02:30.320327   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:30.320334   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:30.320385   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:30.360838   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:30.360865   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:30.360869   65864 cri.go:89] found id: ""
	I0314 01:02:30.360876   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:30.360919   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.366350   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.370839   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:30.370862   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:30.422403   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:30.422432   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:30.461303   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:30.461333   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:30.500335   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:30.500364   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:30.925694   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:30.925740   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:30.977607   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:30.977643   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:31.040726   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:31.040758   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:31.097774   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:31.097811   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:31.161995   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:31.162038   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:31.229782   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:31.229823   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:31.268715   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:31.268742   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:31.288135   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:31.288164   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:31.459345   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:31.459375   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:34.020556   65864 system_pods.go:59] 8 kube-system pods found
	I0314 01:02:34.020589   65864 system_pods.go:61] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running
	I0314 01:02:34.020598   65864 system_pods.go:61] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running
	I0314 01:02:34.020607   65864 system_pods.go:61] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running
	I0314 01:02:34.020612   65864 system_pods.go:61] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running
	I0314 01:02:34.020616   65864 system_pods.go:61] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 01:02:34.020620   65864 system_pods.go:61] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running
	I0314 01:02:34.020628   65864 system_pods.go:61] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:34.020634   65864 system_pods.go:61] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 01:02:34.020644   65864 system_pods.go:74] duration metric: took 4.018436618s to wait for pod list to return data ...
	I0314 01:02:34.020653   65864 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:02:34.023473   65864 default_sa.go:45] found service account: "default"
	I0314 01:02:34.023496   65864 default_sa.go:55] duration metric: took 2.831779ms for default service account to be created ...
	I0314 01:02:34.023504   65864 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:02:34.030011   65864 system_pods.go:86] 8 kube-system pods found
	I0314 01:02:34.030060   65864 system_pods.go:89] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running
	I0314 01:02:34.030068   65864 system_pods.go:89] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running
	I0314 01:02:34.030077   65864 system_pods.go:89] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running
	I0314 01:02:34.030083   65864 system_pods.go:89] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running
	I0314 01:02:34.030092   65864 system_pods.go:89] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 01:02:34.030107   65864 system_pods.go:89] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running
	I0314 01:02:34.030124   65864 system_pods.go:89] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:34.030131   65864 system_pods.go:89] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 01:02:34.030143   65864 system_pods.go:126] duration metric: took 6.633594ms to wait for k8s-apps to be running ...
	I0314 01:02:34.030188   65864 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:02:34.030262   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:34.050932   65864 system_svc.go:56] duration metric: took 20.734837ms WaitForService to wait for kubelet
	I0314 01:02:34.050961   65864 kubeadm.go:576] duration metric: took 4m22.008698948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:02:34.050980   65864 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:02:34.055036   65864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:02:34.055068   65864 node_conditions.go:123] node cpu capacity is 2
	I0314 01:02:34.055083   65864 node_conditions.go:105] duration metric: took 4.097364ms to run NodePressure ...
	I0314 01:02:34.055105   65864 start.go:240] waiting for startup goroutines ...
	I0314 01:02:34.055118   65864 start.go:245] waiting for cluster config update ...
	I0314 01:02:34.055132   65864 start.go:254] writing updated cluster config ...
	I0314 01:02:34.055496   65864 ssh_runner.go:195] Run: rm -f paused
	I0314 01:02:34.113276   65864 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 01:02:34.115462   65864 out.go:177] * Done! kubectl is now configured to use "no-preload-585806" cluster and "default" namespace by default
	I0314 01:02:29.693041   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:32.194975   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:30.538234   66021 pod_ready.go:81] duration metric: took 4m0.007493671s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	E0314 01:02:30.538259   66021 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:02:30.538266   66021 pod_ready.go:38] duration metric: took 4m4.916255619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:02:30.538278   66021 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:02:30.538307   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.538363   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.592811   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:30.592839   66021 cri.go:89] found id: ""
	I0314 01:02:30.592850   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:30.592911   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.598839   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.598908   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.642277   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:30.642301   66021 cri.go:89] found id: ""
	I0314 01:02:30.642310   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:30.642362   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.646745   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.646815   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.696518   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:30.696538   66021 cri.go:89] found id: ""
	I0314 01:02:30.696548   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:30.696601   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.701433   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.701496   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:30.741777   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:30.741805   66021 cri.go:89] found id: ""
	I0314 01:02:30.741815   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:30.741873   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.746610   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:30.746678   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:30.802714   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:30.802734   66021 cri.go:89] found id: ""
	I0314 01:02:30.802743   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:30.802905   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.807733   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:30.807800   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:30.857325   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:30.857348   66021 cri.go:89] found id: ""
	I0314 01:02:30.857357   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:30.857411   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.864272   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:30.864342   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:30.913206   66021 cri.go:89] found id: ""
	I0314 01:02:30.913233   66021 logs.go:276] 0 containers: []
	W0314 01:02:30.913240   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:30.913246   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:30.913306   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:30.962101   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:30.962140   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:30.962146   66021 cri.go:89] found id: ""
	I0314 01:02:30.962164   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:30.962225   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.968138   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.974297   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:30.974321   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:31.169483   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:31.169515   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:31.231894   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:31.231933   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:31.292732   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:31.292784   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:31.340076   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:31.340116   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:31.405921   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:31.405964   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:31.456370   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:31.456398   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:31.504710   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:31.504736   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:31.989644   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:31.989675   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:32.048608   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:32.048641   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:32.063791   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:32.063820   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:32.104259   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:32.104285   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:32.143364   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:32.143388   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:34.704603   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:34.723060   66021 api_server.go:72] duration metric: took 4m16.82749669s to wait for apiserver process to appear ...
	I0314 01:02:34.723094   66021 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:02:34.723131   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:34.723195   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:34.763208   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:34.763235   66021 cri.go:89] found id: ""
	I0314 01:02:34.763245   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:34.763321   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.768746   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:34.768824   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:34.811836   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:34.811859   66021 cri.go:89] found id: ""
	I0314 01:02:34.811867   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:34.811921   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.816649   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:34.816714   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:34.857291   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:34.857312   66021 cri.go:89] found id: ""
	I0314 01:02:34.857319   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:34.857364   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.861988   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:34.862069   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:34.903495   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:34.903520   66021 cri.go:89] found id: ""
	I0314 01:02:34.903529   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:34.903589   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.908672   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:34.908728   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:34.954304   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:34.954327   66021 cri.go:89] found id: ""
	I0314 01:02:34.954335   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:34.954381   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.959231   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:34.959288   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:35.004076   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:35.004102   66021 cri.go:89] found id: ""
	I0314 01:02:35.004111   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:35.004164   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.009125   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:35.009193   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:35.049932   66021 cri.go:89] found id: ""
	I0314 01:02:35.049961   66021 logs.go:276] 0 containers: []
	W0314 01:02:35.049971   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:35.049979   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:35.050047   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:35.107527   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:35.107575   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:35.107582   66021 cri.go:89] found id: ""
	I0314 01:02:35.107591   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:35.107649   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.112355   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.116898   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:35.116925   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:34.021725   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:34.039342   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:34.039420   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:34.086740   66232 cri.go:89] found id: ""
	I0314 01:02:34.086775   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.086787   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:34.086803   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:34.086869   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:34.131404   66232 cri.go:89] found id: ""
	I0314 01:02:34.131432   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.131440   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:34.131445   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:34.131497   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:34.179153   66232 cri.go:89] found id: ""
	I0314 01:02:34.179182   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.179192   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:34.179199   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:34.179255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:34.228867   66232 cri.go:89] found id: ""
	I0314 01:02:34.228892   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.228902   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:34.228908   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:34.228942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:34.272680   66232 cri.go:89] found id: ""
	I0314 01:02:34.272705   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.272715   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:34.272722   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:34.272772   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:34.311626   66232 cri.go:89] found id: ""
	I0314 01:02:34.311672   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.311684   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:34.311692   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:34.311751   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:34.349977   66232 cri.go:89] found id: ""
	I0314 01:02:34.349998   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.350006   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:34.350012   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:34.350070   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:34.398456   66232 cri.go:89] found id: ""
	I0314 01:02:34.398481   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.398491   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:34.398503   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:34.398515   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:34.472170   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:34.472208   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:34.498046   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:34.498076   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:34.574474   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:34.574496   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:34.574529   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:34.656398   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:34.656435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:37.201236   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:37.216950   66232 kubeadm.go:591] duration metric: took 4m2.27726413s to restartPrimaryControlPlane
	W0314 01:02:37.217024   66232 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 01:02:37.217054   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 01:02:34.693825   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:37.191981   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:39.193819   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:35.155896   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:35.155929   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:35.198893   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:35.198923   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:35.258044   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:35.258076   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:35.296826   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:35.296859   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:35.349583   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:35.349619   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:35.400768   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:35.400805   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:35.528320   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:35.528357   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:35.571141   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:35.571174   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:35.612630   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:35.612658   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:36.034287   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:36.034321   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:36.093027   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:36.093054   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:36.150546   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:36.150589   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:38.673291   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 01:02:38.678087   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0314 01:02:38.679655   66021 api_server.go:141] control plane version: v1.28.4
	I0314 01:02:38.679674   66021 api_server.go:131] duration metric: took 3.956573598s to wait for apiserver health ...
	I0314 01:02:38.679680   66021 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:02:38.679700   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:38.679741   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:38.727884   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:38.727908   66021 cri.go:89] found id: ""
	I0314 01:02:38.727918   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:38.727974   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.732935   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:38.733003   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:38.771359   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:38.771387   66021 cri.go:89] found id: ""
	I0314 01:02:38.771397   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:38.771452   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.775888   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:38.775948   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:38.814905   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:38.814934   66021 cri.go:89] found id: ""
	I0314 01:02:38.814944   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:38.815018   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.820018   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:38.820096   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:38.869174   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:38.869200   66021 cri.go:89] found id: ""
	I0314 01:02:38.869210   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:38.869268   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.879998   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:38.880071   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:38.960143   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:38.960187   66021 cri.go:89] found id: ""
	I0314 01:02:38.960198   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:38.960258   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.964872   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:38.964940   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:39.005104   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:39.005126   66021 cri.go:89] found id: ""
	I0314 01:02:39.005134   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:39.005178   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.009751   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:39.009803   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:39.048232   66021 cri.go:89] found id: ""
	I0314 01:02:39.048263   66021 logs.go:276] 0 containers: []
	W0314 01:02:39.048274   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:39.048281   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:39.048335   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:39.087548   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:39.087568   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:39.087572   66021 cri.go:89] found id: ""
	I0314 01:02:39.087579   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:39.087624   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.092379   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.097599   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:39.097621   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:39.236455   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:39.236484   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:39.284275   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:39.284300   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:39.341908   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:39.341939   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:39.384407   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:39.384435   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:39.445137   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:39.445167   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:39.501656   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:39.501686   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:39.567627   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:39.567661   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:39.584561   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:39.584601   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:39.626131   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:39.626196   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:40.002525   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:40.002572   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:40.058721   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:40.058753   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:40.097905   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:40.097941   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:39.562661   66232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.345580159s)
	I0314 01:02:39.562733   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:39.579845   66232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 01:02:39.592242   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:02:39.603936   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:02:39.603962   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 01:02:39.604023   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:02:39.614854   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:02:39.614909   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:02:39.626602   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:02:39.637282   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:02:39.637334   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:02:39.650019   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:02:39.662020   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:02:39.662084   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:02:39.674740   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:02:39.685131   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:02:39.685190   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 01:02:39.696251   66232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:02:39.768972   66232 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 01:02:39.769055   66232 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:02:39.926950   66232 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:02:39.927086   66232 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:02:39.927239   66232 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 01:02:40.161671   66232 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:02:40.164039   66232 out.go:204]   - Generating certificates and keys ...
	I0314 01:02:40.164124   66232 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:02:40.164219   66232 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:02:40.164321   66232 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 01:02:40.164411   66232 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 01:02:40.164508   66232 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 01:02:40.164595   66232 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 01:02:40.164680   66232 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 01:02:40.164762   66232 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 01:02:40.164868   66232 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 01:02:40.164982   66232 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 01:02:40.165050   66232 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 01:02:40.165123   66232 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:02:40.264416   66232 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:02:40.417229   66232 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:02:40.489457   66232 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:02:40.743517   66232 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:02:40.759319   66232 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:02:40.760643   66232 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:02:40.760715   66232 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:02:40.939953   66232 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 01:02:42.643820   66021 system_pods.go:59] 8 kube-system pods found
	I0314 01:02:42.643846   66021 system_pods.go:61] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running
	I0314 01:02:42.643851   66021 system_pods.go:61] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running
	I0314 01:02:42.643854   66021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running
	I0314 01:02:42.643858   66021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running
	I0314 01:02:42.643861   66021 system_pods.go:61] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running
	I0314 01:02:42.643863   66021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running
	I0314 01:02:42.643869   66021 system_pods.go:61] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:42.643874   66021 system_pods.go:61] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 01:02:42.643881   66021 system_pods.go:74] duration metric: took 3.964195909s to wait for pod list to return data ...
	I0314 01:02:42.643888   66021 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:02:42.646461   66021 default_sa.go:45] found service account: "default"
	I0314 01:02:42.646481   66021 default_sa.go:55] duration metric: took 2.585464ms for default service account to be created ...
	I0314 01:02:42.646490   66021 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:02:42.651961   66021 system_pods.go:86] 8 kube-system pods found
	I0314 01:02:42.651983   66021 system_pods.go:89] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running
	I0314 01:02:42.651989   66021 system_pods.go:89] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running
	I0314 01:02:42.651993   66021 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running
	I0314 01:02:42.651998   66021 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running
	I0314 01:02:42.652002   66021 system_pods.go:89] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running
	I0314 01:02:42.652006   66021 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running
	I0314 01:02:42.652012   66021 system_pods.go:89] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:42.652019   66021 system_pods.go:89] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 01:02:42.652027   66021 system_pods.go:126] duration metric: took 5.530611ms to wait for k8s-apps to be running ...
	I0314 01:02:42.652037   66021 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:02:42.652078   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:42.669896   66021 system_svc.go:56] duration metric: took 17.851623ms WaitForService to wait for kubelet
	I0314 01:02:42.669930   66021 kubeadm.go:576] duration metric: took 4m24.774372903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:02:42.669965   66021 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:02:42.672766   66021 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:02:42.672789   66021 node_conditions.go:123] node cpu capacity is 2
	I0314 01:02:42.672802   66021 node_conditions.go:105] duration metric: took 2.830665ms to run NodePressure ...
	I0314 01:02:42.672813   66021 start.go:240] waiting for startup goroutines ...
	I0314 01:02:42.672819   66021 start.go:245] waiting for cluster config update ...
	I0314 01:02:42.672829   66021 start.go:254] writing updated cluster config ...
	I0314 01:02:42.673076   66021 ssh_runner.go:195] Run: rm -f paused
	I0314 01:02:42.721481   66021 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 01:02:42.723479   66021 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-652215" cluster and "default" namespace by default
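For reference, the readiness checks logged above (kube-system pods, default service account, kubelet service) can be reproduced by hand against the same cluster. This is only a hedged sketch, assuming the kubeconfig context name from the log and shell access to the node; it is not part of the test output:
	# list kube-system pods; everything except metrics-server was Running in the log above
	kubectl --context default-k8s-diff-port-652215 get pods -n kube-system
	# confirm the default service account exists
	kubectl --context default-k8s-diff-port-652215 get serviceaccount default -n default
	# on the node, the same kubelet check minikube ran
	sudo systemctl is-active --quiet service kubelet && echo kubelet active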
	I0314 01:02:40.942001   66232 out.go:204]   - Booting up control plane ...
	I0314 01:02:40.942144   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:02:40.951012   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:02:40.952452   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:02:40.953336   66232 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:02:40.960365   66232 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:02:41.692569   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:43.693995   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:46.193241   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:48.194371   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:50.692479   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:52.692654   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:55.192035   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:57.692909   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:00.193154   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:02.194296   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:04.196022   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:06.693006   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:09.192302   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:11.192955   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:13.692552   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:15.192489   65557 pod_ready.go:81] duration metric: took 4m0.007020608s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	E0314 01:03:15.192527   65557 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:03:15.192538   65557 pod_ready.go:38] duration metric: took 4m4.053934642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:03:15.192554   65557 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:03:15.192587   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:15.192647   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:15.256619   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:15.256643   65557 cri.go:89] found id: ""
	I0314 01:03:15.256653   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:15.256707   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.262251   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:15.262317   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:15.305577   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:15.305605   65557 cri.go:89] found id: ""
	I0314 01:03:15.305613   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:15.305676   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.311058   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:15.311136   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:15.350580   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:15.350605   65557 cri.go:89] found id: ""
	I0314 01:03:15.350615   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:15.350675   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.355574   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:15.355637   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:15.395248   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:15.395278   65557 cri.go:89] found id: ""
	I0314 01:03:15.395289   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:15.395345   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.400714   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:15.400789   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:15.446181   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:15.446207   65557 cri.go:89] found id: ""
	I0314 01:03:15.446217   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:15.446280   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.451142   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:15.451220   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:15.499079   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:15.499106   65557 cri.go:89] found id: ""
	I0314 01:03:15.499120   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:15.499178   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.504092   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:15.504158   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:15.546791   65557 cri.go:89] found id: ""
	I0314 01:03:15.546820   65557 logs.go:276] 0 containers: []
	W0314 01:03:15.546830   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:15.546838   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:15.546898   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:15.586249   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:15.586271   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:15.586275   65557 cri.go:89] found id: ""
	I0314 01:03:15.586282   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:15.586341   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.590680   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.595060   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:15.595086   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:16.112562   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:16.112623   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:16.172847   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:16.172882   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:16.333057   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:16.333098   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:16.386456   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:16.386490   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:16.444375   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:16.444402   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:16.486220   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:16.486260   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:16.526438   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:16.526470   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:16.576927   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:16.576958   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:16.592148   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:16.592174   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:16.648514   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:16.648545   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:16.695025   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:16.695051   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:16.746925   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:16.746955   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
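The log-gathering pass above follows a fixed pattern: journalctl for CRI-O and the kubelet, a describe-nodes dump via the bundled kubectl, and crictl logs --tail 400 for each discovered container ID. A rough manual equivalent on the node, with CONTAINER_ID standing in for one of the IDs found above (a placeholder, not a value from this run), would be:
	sudo journalctl -u crio -n 400              # CRI-O runtime logs
	sudo journalctl -u kubelet -n 400           # kubelet logs
	sudo crictl ps -a                           # container status across all states
	sudo crictl logs --tail 400 CONTAINER_ID    # last 400 lines of one container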
	I0314 01:03:19.285952   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:03:19.304257   65557 api_server.go:72] duration metric: took 4m15.904145845s to wait for apiserver process to appear ...
	I0314 01:03:19.304286   65557 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:03:19.304325   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:19.304387   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:20.960311   66232 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 01:03:20.961416   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:20.961634   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:19.352722   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:19.352749   65557 cri.go:89] found id: ""
	I0314 01:03:19.352758   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:19.352813   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.358745   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:19.358840   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:19.398652   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:19.398677   65557 cri.go:89] found id: ""
	I0314 01:03:19.398687   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:19.398745   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.403737   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:19.403812   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:19.449705   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:19.449789   65557 cri.go:89] found id: ""
	I0314 01:03:19.449804   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:19.449875   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.454646   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:19.454703   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:19.497413   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:19.497437   65557 cri.go:89] found id: ""
	I0314 01:03:19.497446   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:19.497505   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.502314   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:19.502383   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:19.544651   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:19.544670   65557 cri.go:89] found id: ""
	I0314 01:03:19.544677   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:19.544734   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.549565   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:19.549627   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:19.588946   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:19.588964   65557 cri.go:89] found id: ""
	I0314 01:03:19.588971   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:19.589021   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.593896   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:19.593962   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:19.635716   65557 cri.go:89] found id: ""
	I0314 01:03:19.635742   65557 logs.go:276] 0 containers: []
	W0314 01:03:19.635753   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:19.635759   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:19.635815   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:19.677464   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.677489   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:19.677495   65557 cri.go:89] found id: ""
	I0314 01:03:19.677505   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:19.677565   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.682353   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.687167   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:19.687188   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:19.736953   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:19.736991   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:19.781476   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:19.781506   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:19.822236   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:19.822265   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:19.866289   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:19.866312   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:19.911787   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:19.911815   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.950065   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:19.950101   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:19.989521   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:19.989554   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:20.384831   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:20.384868   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:20.441338   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:20.441369   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:20.457686   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:20.457713   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:20.576908   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:20.576939   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:20.620339   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:20.620368   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:23.171840   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 01:03:23.178026   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0314 01:03:23.179553   65557 api_server.go:141] control plane version: v1.28.4
	I0314 01:03:23.179581   65557 api_server.go:131] duration metric: took 3.875286718s to wait for apiserver health ...
	I0314 01:03:23.179592   65557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:03:23.179620   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:23.179680   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:23.228503   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:23.228523   65557 cri.go:89] found id: ""
	I0314 01:03:23.228530   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:23.228582   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.233166   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:23.233236   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:23.274079   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:23.274110   65557 cri.go:89] found id: ""
	I0314 01:03:23.274120   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:23.274179   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.279453   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:23.279559   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:23.319821   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:23.319844   65557 cri.go:89] found id: ""
	I0314 01:03:23.319854   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:23.319914   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.325134   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:23.325199   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:23.366475   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:23.366496   65557 cri.go:89] found id: ""
	I0314 01:03:23.366503   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:23.366547   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.371660   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:23.371716   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:23.416034   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:23.416060   65557 cri.go:89] found id: ""
	I0314 01:03:23.416069   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:23.416128   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.421256   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:23.421319   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:23.461772   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:23.461792   65557 cri.go:89] found id: ""
	I0314 01:03:23.461799   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:23.461848   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.466581   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:23.466644   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:23.513583   65557 cri.go:89] found id: ""
	I0314 01:03:23.513610   65557 logs.go:276] 0 containers: []
	W0314 01:03:23.513626   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:23.513633   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:23.513693   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:23.554856   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:23.554875   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:23.554879   65557 cri.go:89] found id: ""
	I0314 01:03:23.554885   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:23.554932   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.559820   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.564514   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:23.564534   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:23.619210   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:23.619246   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:23.750881   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:23.750908   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:23.800300   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:23.800342   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:23.849606   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:23.849637   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:23.896168   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:23.896194   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:23.938976   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:23.939008   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:23.955960   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:23.955988   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:23.999961   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:23.999990   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:24.044533   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:24.044562   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:24.097691   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:24.097720   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:24.137172   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:24.137207   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:24.480724   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:24.480767   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:27.042143   65557 system_pods.go:59] 8 kube-system pods found
	I0314 01:03:27.042177   65557 system_pods.go:61] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running
	I0314 01:03:27.042185   65557 system_pods.go:61] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running
	I0314 01:03:27.042191   65557 system_pods.go:61] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running
	I0314 01:03:27.042197   65557 system_pods.go:61] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running
	I0314 01:03:27.042201   65557 system_pods.go:61] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running
	I0314 01:03:27.042206   65557 system_pods.go:61] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running
	I0314 01:03:27.042213   65557 system_pods.go:61] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:03:27.042220   65557 system_pods.go:61] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running
	I0314 01:03:27.042231   65557 system_pods.go:74] duration metric: took 3.862631414s to wait for pod list to return data ...
	I0314 01:03:27.042241   65557 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:03:27.045464   65557 default_sa.go:45] found service account: "default"
	I0314 01:03:27.045542   65557 default_sa.go:55] duration metric: took 3.286713ms for default service account to be created ...
	I0314 01:03:27.045573   65557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:03:27.057164   65557 system_pods.go:86] 8 kube-system pods found
	I0314 01:03:27.057193   65557 system_pods.go:89] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running
	I0314 01:03:27.057199   65557 system_pods.go:89] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running
	I0314 01:03:27.057204   65557 system_pods.go:89] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running
	I0314 01:03:27.057209   65557 system_pods.go:89] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running
	I0314 01:03:27.057213   65557 system_pods.go:89] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running
	I0314 01:03:27.057217   65557 system_pods.go:89] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running
	I0314 01:03:27.057224   65557 system_pods.go:89] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:03:27.057236   65557 system_pods.go:89] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running
	I0314 01:03:27.057243   65557 system_pods.go:126] duration metric: took 11.663667ms to wait for k8s-apps to be running ...
	I0314 01:03:27.057249   65557 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:03:27.057295   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:03:27.075469   65557 system_svc.go:56] duration metric: took 18.20927ms WaitForService to wait for kubelet
	I0314 01:03:27.075501   65557 kubeadm.go:576] duration metric: took 4m23.675393774s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:03:27.075521   65557 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:03:27.079149   65557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:03:27.079177   65557 node_conditions.go:123] node cpu capacity is 2
	I0314 01:03:27.079191   65557 node_conditions.go:105] duration metric: took 3.664222ms to run NodePressure ...
	I0314 01:03:27.079204   65557 start.go:240] waiting for startup goroutines ...
	I0314 01:03:27.079214   65557 start.go:245] waiting for cluster config update ...
	I0314 01:03:27.079228   65557 start.go:254] writing updated cluster config ...
	I0314 01:03:27.079567   65557 ssh_runner.go:195] Run: rm -f paused
	I0314 01:03:27.128453   65557 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 01:03:27.131043   65557 out.go:177] * Done! kubectl is now configured to use "embed-certs-164135" cluster and "default" namespace by default
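The apiserver health probe in this run hits https://192.168.50.72:8443/healthz and expects HTTP 200 with a plain "ok" body. A tentative manual check is sketched below; -k skips TLS verification, and depending on the cluster's RBAC the endpoint may instead require client credentials:
	curl -k https://192.168.50.72:8443/healthz
	# expected output on a healthy control plane:
	# ok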
	I0314 01:03:25.961895   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:25.962127   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:35.962149   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:35.962352   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:55.963116   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:55.963372   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:04:35.964528   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:04:35.964814   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:04:35.964841   66232 kubeadm.go:309] 
	I0314 01:04:35.964900   66232 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 01:04:35.964961   66232 kubeadm.go:309] 		timed out waiting for the condition
	I0314 01:04:35.964972   66232 kubeadm.go:309] 
	I0314 01:04:35.965026   66232 kubeadm.go:309] 	This error is likely caused by:
	I0314 01:04:35.965074   66232 kubeadm.go:309] 		- The kubelet is not running
	I0314 01:04:35.965219   66232 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 01:04:35.965231   66232 kubeadm.go:309] 
	I0314 01:04:35.965372   66232 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 01:04:35.965421   66232 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 01:04:35.965476   66232 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 01:04:35.965489   66232 kubeadm.go:309] 
	I0314 01:04:35.965638   66232 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 01:04:35.965743   66232 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 01:04:35.965753   66232 kubeadm.go:309] 
	I0314 01:04:35.965872   66232 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 01:04:35.965991   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 01:04:35.966110   66232 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 01:04:35.966220   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 01:04:35.966237   66232 kubeadm.go:309] 
	I0314 01:04:35.966903   66232 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:04:35.967031   66232 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 01:04:35.967165   66232 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0314 01:04:35.967278   66232 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
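The failure output above points at the kubelet never becoming healthy on port 10248. The commands it recommends can be run directly on the node; the following simply restates the log's own suggestions in runnable form, with CONTAINERID as the placeholder the message itself uses:
	systemctl status kubelet                                    # is the service running?
	journalctl -xeu kubelet                                     # recent kubelet errors
	curl -sSL http://localhost:10248/healthz                    # the probe kubeadm kept retrying
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID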
	
	I0314 01:04:35.967374   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 01:04:36.533381   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:04:36.550315   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:04:36.562559   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:04:36.562582   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 01:04:36.562646   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:04:36.573080   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:04:36.573148   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:04:36.583367   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:04:36.592837   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:04:36.592905   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:04:36.602671   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:04:36.611880   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:04:36.611923   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:04:36.621373   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:04:36.630200   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:04:36.630250   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
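Before retrying kubeadm init, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes files that do not contain it (here the files are simply absent after the reset, so each grep exits with status 2 and the rm is a no-op). A compact sketch of that check-and-remove loop, mirroring the per-file grep/rm pairs logged above:
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done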
	I0314 01:04:36.639622   66232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:04:36.876475   66232 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:06:32.905531   66232 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 01:06:32.905658   66232 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 01:06:32.907378   66232 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 01:06:32.907462   66232 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:06:32.907597   66232 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:06:32.907758   66232 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:06:32.907878   66232 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 01:06:32.907969   66232 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:06:32.909826   66232 out.go:204]   - Generating certificates and keys ...
	I0314 01:06:32.909915   66232 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:06:32.909976   66232 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:06:32.910065   66232 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 01:06:32.910143   66232 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 01:06:32.910232   66232 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 01:06:32.910306   66232 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 01:06:32.910371   66232 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 01:06:32.910450   66232 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 01:06:32.910516   66232 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 01:06:32.910579   66232 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 01:06:32.910616   66232 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 01:06:32.910705   66232 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:06:32.910809   66232 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:06:32.910860   66232 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:06:32.910946   66232 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:06:32.911032   66232 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:06:32.911131   66232 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:06:32.911225   66232 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:06:32.911290   66232 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:06:32.911360   66232 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 01:06:32.912972   66232 out.go:204]   - Booting up control plane ...
	I0314 01:06:32.913087   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:06:32.913169   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:06:32.913260   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:06:32.913336   66232 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:06:32.913475   66232 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:06:32.913555   66232 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 01:06:32.913645   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.913879   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.913979   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914216   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914294   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914461   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914521   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914704   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914827   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.915063   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.915076   66232 kubeadm.go:309] 
	I0314 01:06:32.915112   66232 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 01:06:32.915167   66232 kubeadm.go:309] 		timed out waiting for the condition
	I0314 01:06:32.915177   66232 kubeadm.go:309] 
	I0314 01:06:32.915230   66232 kubeadm.go:309] 	This error is likely caused by:
	I0314 01:06:32.915269   66232 kubeadm.go:309] 		- The kubelet is not running
	I0314 01:06:32.915353   66232 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 01:06:32.915360   66232 kubeadm.go:309] 
	I0314 01:06:32.915441   66232 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 01:06:32.915469   66232 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 01:06:32.915498   66232 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 01:06:32.915505   66232 kubeadm.go:309] 
	I0314 01:06:32.915613   66232 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 01:06:32.915700   66232 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 01:06:32.915712   66232 kubeadm.go:309] 
	I0314 01:06:32.915855   66232 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 01:06:32.915955   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 01:06:32.916023   66232 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 01:06:32.916088   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 01:06:32.916154   66232 kubeadm.go:393] duration metric: took 7m58.036160375s to StartCluster
	I0314 01:06:32.916166   66232 kubeadm.go:309] 
	I0314 01:06:32.916226   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:06:32.916295   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:06:32.972336   66232 cri.go:89] found id: ""
	I0314 01:06:32.972364   66232 logs.go:276] 0 containers: []
	W0314 01:06:32.972371   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:06:32.972380   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:06:32.972434   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:06:33.023008   66232 cri.go:89] found id: ""
	I0314 01:06:33.023039   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.023050   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:06:33.023057   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:06:33.023130   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:06:33.061974   66232 cri.go:89] found id: ""
	I0314 01:06:33.062002   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.062011   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:06:33.062017   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:06:33.062085   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:06:33.101221   66232 cri.go:89] found id: ""
	I0314 01:06:33.101252   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.101264   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:06:33.101271   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:06:33.101330   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:06:33.139665   66232 cri.go:89] found id: ""
	I0314 01:06:33.139689   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.139697   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:06:33.139707   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:06:33.139753   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:06:33.186493   66232 cri.go:89] found id: ""
	I0314 01:06:33.186519   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.186530   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:06:33.186538   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:06:33.186610   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:06:33.236042   66232 cri.go:89] found id: ""
	I0314 01:06:33.236071   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.236083   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:06:33.236091   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:06:33.236148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:06:33.279285   66232 cri.go:89] found id: ""
	I0314 01:06:33.279316   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.279326   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:06:33.279338   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:06:33.279361   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:06:33.331702   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:06:33.331734   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:06:33.347222   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:06:33.347249   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:06:33.437201   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:06:33.437225   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:06:33.437240   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:06:33.550099   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:06:33.550135   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0314 01:06:33.596794   66232 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 01:06:33.596833   66232 out.go:239] * 
	W0314 01:06:33.596906   66232 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 01:06:33.596927   66232 out.go:239] * 
	W0314 01:06:33.597713   66232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 01:06:33.601567   66232 out.go:177] 
	W0314 01:06:33.602661   66232 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 01:06:33.602704   66232 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 01:06:33.602722   66232 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 01:06:33.604223   66232 out.go:177] 
	
	
	==> CRI-O <==
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.454869596Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378395454841031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b19c8f9f-b34f-47df-b442-f596cf7e5310 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.456313540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6776171b-81a6-433c-a9b2-6581305a5b45 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.456363844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6776171b-81a6-433c-a9b2-6581305a5b45 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.456396011Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6776171b-81a6-433c-a9b2-6581305a5b45 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.494820580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e028962-4af4-4279-a92b-cb2cdd2ee609 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.494899425Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e028962-4af4-4279-a92b-cb2cdd2ee609 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.496181256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95fa2db2-902e-4ec3-af13-f387ecff87af name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.496762724Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378395496723000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95fa2db2-902e-4ec3-af13-f387ecff87af name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.497525753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e8b9e17-85c9-4cca-a30c-c78e57ffe9a9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.497594262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e8b9e17-85c9-4cca-a30c-c78e57ffe9a9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.497637600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2e8b9e17-85c9-4cca-a30c-c78e57ffe9a9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.532329310Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87a6b676-ed0a-4121-a967-14ec05e96d9a name=/runtime.v1.RuntimeService/Version
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.532408656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87a6b676-ed0a-4121-a967-14ec05e96d9a name=/runtime.v1.RuntimeService/Version
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.534105584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aab8c6c9-840a-4e32-be3d-ca57fd8d7535 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.534546531Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378395534516084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aab8c6c9-840a-4e32-be3d-ca57fd8d7535 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.535098402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=934d8c79-7dee-41b0-8af4-d8a6537df8f5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.535173172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=934d8c79-7dee-41b0-8af4-d8a6537df8f5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.535209603Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=934d8c79-7dee-41b0-8af4-d8a6537df8f5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.570916655Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c65d94cc-6e9e-4fb0-8f4e-ad9d780f63fd name=/runtime.v1.RuntimeService/Version
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.571073294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c65d94cc-6e9e-4fb0-8f4e-ad9d780f63fd name=/runtime.v1.RuntimeService/Version
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.572133449Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de6485f1-78c3-43df-929e-4695087c4166 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.572548078Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378395572521986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de6485f1-78c3-43df-929e-4695087c4166 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.573152489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16b7be61-2397-4bf8-bfae-a48b00f53042 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.573225992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16b7be61-2397-4bf8-bfae-a48b00f53042 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:06:35 old-k8s-version-004791 crio[647]: time="2024-03-14 01:06:35.573279501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=16b7be61-2397-4bf8-bfae-a48b00f53042 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar14 00:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052991] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047909] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.890210] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.079753] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.730198] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.316199] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.062984] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075521] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.214616] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.150711] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.294146] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.930331] systemd-fstab-generator[830]: Ignoring "noauto" option for root device
	[  +0.061685] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.999458] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +8.247240] kauditd_printk_skb: 46 callbacks suppressed
	[Mar14 01:02] systemd-fstab-generator[4935]: Ignoring "noauto" option for root device
	[Mar14 01:04] systemd-fstab-generator[5216]: Ignoring "noauto" option for root device
	[  +0.077634] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:06:35 up 8 min,  0 users,  load average: 0.41, 0.19, 0.11
	Linux old-k8s-version-004791 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d33ef0, 0x4f0ac20, 0xc000bd7c70, 0x1, 0xc0001000c0)
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00024ad20, 0xc0001000c0)
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c98430, 0xc000c94be0)
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]: goroutine 159 [runnable]:
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1(0xc000c75f80, 0xc00024ac40, 0xc000c773b0, 0xc000c96aa0, 0xc000c98598, 0xc000c96ab0, 0xc000c9c8a0)
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:268
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1
	Mar 14 01:06:32 old-k8s-version-004791 kubelet[5392]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:268 +0x295
	Mar 14 01:06:32 old-k8s-version-004791 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 14 01:06:32 old-k8s-version-004791 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 14 01:06:33 old-k8s-version-004791 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Mar 14 01:06:33 old-k8s-version-004791 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 14 01:06:33 old-k8s-version-004791 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 14 01:06:33 old-k8s-version-004791 kubelet[5459]: I0314 01:06:33.723381    5459 server.go:416] Version: v1.20.0
	Mar 14 01:06:33 old-k8s-version-004791 kubelet[5459]: I0314 01:06:33.723801    5459 server.go:837] Client rotation is on, will bootstrap in background
	Mar 14 01:06:33 old-k8s-version-004791 kubelet[5459]: I0314 01:06:33.726736    5459 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 14 01:06:33 old-k8s-version-004791 kubelet[5459]: W0314 01:06:33.728010    5459 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 14 01:06:33 old-k8s-version-004791 kubelet[5459]: I0314 01:06:33.728572    5459 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-004791 -n old-k8s-version-004791
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-004791 -n old-k8s-version-004791: exit status 2 (250.427584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-004791" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (754.14s)
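The captured log above ends with minikube's own suggestion to retry the start with the kubelet cgroup driver pinned to systemd. A minimal sketch of that retry, assuming the same profile name and the start flags recorded in the report's audit log (not a verified fix for this failure), would be:

	out/minikube-linux-amd64 start -p old-k8s-version-004791 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

The --extra-config=kubelet.cgroup-driver=systemd flag and the related issue link (kubernetes/minikube#4172) come directly from the suggestion in the captured log; the other flags mirror the recorded start command, and whether this resolves the kubelet startup failure on this node image is not verified here.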

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.37s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-585806 -n no-preload-585806
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-14 01:11:34.737150831 +0000 UTC m=+6322.806912311
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-585806 -n no-preload-585806
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-585806 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-585806 logs -n 25: (2.164059797s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-326260 sudo cat                              | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo find                             | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo crio                             | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-326260                                       | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-573365 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | disable-driver-mounts-573365                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:51 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-164135            | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-585806             | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-652215  | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC | 14 Mar 24 00:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC |                     |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-004791        | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-164135                 | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC | 14 Mar 24 01:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-585806                  | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-652215       | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 00:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-004791             | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC | 14 Mar 24 00:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:54:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:54:03.108880   66232 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:54:03.109016   66232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:54:03.109028   66232 out.go:304] Setting ErrFile to fd 2...
	I0314 00:54:03.109034   66232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:54:03.109233   66232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:54:03.109796   66232 out.go:298] Setting JSON to false
	I0314 00:54:03.110638   66232 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5786,"bootTime":1710371857,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:54:03.110699   66232 start.go:139] virtualization: kvm guest
	I0314 00:54:03.113106   66232 out.go:177] * [old-k8s-version-004791] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:54:03.114565   66232 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:54:03.115894   66232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:54:03.114598   66232 notify.go:220] Checking for updates...
	I0314 00:54:03.119029   66232 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:54:03.120493   66232 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:54:03.121915   66232 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:54:03.123383   66232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:54:03.125258   66232 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:54:03.125814   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:54:03.125873   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:54:03.140521   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0314 00:54:03.140889   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:54:03.141339   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:54:03.141362   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:54:03.141702   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:54:03.141898   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:54:03.143989   66232 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0314 00:54:03.145403   66232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:54:03.145671   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:54:03.145711   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:54:03.159852   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0314 00:54:03.160244   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:54:03.160722   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:54:03.160742   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:54:03.161088   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:54:03.161279   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:54:03.197047   66232 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 00:54:03.198624   66232 start.go:297] selected driver: kvm2
	I0314 00:54:03.198642   66232 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:54:03.198784   66232 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:54:03.199455   66232 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:54:03.199536   66232 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:54:03.214619   66232 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:54:03.214983   66232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 00:54:03.215045   66232 cni.go:84] Creating CNI manager for ""
	I0314 00:54:03.215065   66232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:54:03.215109   66232 start.go:340] cluster config:
	{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:54:03.215204   66232 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:54:03.217175   66232 out.go:177] * Starting "old-k8s-version-004791" primary control-plane node in "old-k8s-version-004791" cluster
	I0314 00:54:03.607045   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:03.218613   66232 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:54:03.218655   66232 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0314 00:54:03.218680   66232 cache.go:56] Caching tarball of preloaded images
	I0314 00:54:03.218748   66232 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 00:54:03.218758   66232 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0314 00:54:03.218868   66232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:54:03.219079   66232 start.go:360] acquireMachinesLock for old-k8s-version-004791: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:54:06.679066   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:12.759084   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:15.831164   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:21.911055   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:24.983011   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:31.063042   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:34.135127   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:40.215026   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:43.287108   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:49.367033   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:52.439207   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:58.519055   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:01.591066   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:07.671067   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:10.743137   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:16.823021   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:19.895094   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:25.975060   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:29.047059   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:35.127005   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:38.199075   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:44.279056   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:47.351112   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:53.431074   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:56.503093   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:02.583065   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:05.655062   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:11.735056   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:14.807089   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:20.887027   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:23.959111   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:30.039063   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:33.111114   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:39.191071   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:42.263146   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:48.343110   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:51.415094   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:57.495078   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:00.567113   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:06.647070   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:09.719103   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:15.799052   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:18.871072   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
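(Editor's note: the long run of "Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host" lines above is libmachine's SSH wait loop repeatedly failing to reach the embed-certs guest's SSH port. As a minimal standalone sketch, assuming only the host:port captured in the log and not any minikube/libmachine internals, the same reachability probe can be reproduced with Go's net.DialTimeout; on an unreachable guest it returns the identical error string.)

// reachability_check.go - hedged sketch, not minikube source.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Guest SSH endpoint taken from the log lines above; adjust for your own VM.
	addr := "192.168.50.72:22"
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		// For an unreachable guest this prints
		// "dial tcp 192.168.50.72:22: connect: no route to host",
		// matching the repeated log entries.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("SSH port reachable:", conn.RemoteAddr())
}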
	I0314 00:57:21.875726   65864 start.go:364] duration metric: took 3m53.150432404s to acquireMachinesLock for "no-preload-585806"
	I0314 00:57:21.875777   65864 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:57:21.875782   65864 fix.go:54] fixHost starting: 
	I0314 00:57:21.876117   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:57:21.876145   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:57:21.891135   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0314 00:57:21.891589   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:57:21.892096   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:57:21.892118   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:57:21.892476   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:57:21.892705   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:21.892868   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:57:21.894635   65864 fix.go:112] recreateIfNeeded on no-preload-585806: state=Stopped err=<nil>
	I0314 00:57:21.894652   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	W0314 00:57:21.894870   65864 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:57:21.896740   65864 out.go:177] * Restarting existing kvm2 VM for "no-preload-585806" ...
	I0314 00:57:21.898041   65864 main.go:141] libmachine: (no-preload-585806) Calling .Start
	I0314 00:57:21.898219   65864 main.go:141] libmachine: (no-preload-585806) Ensuring networks are active...
	I0314 00:57:21.899235   65864 main.go:141] libmachine: (no-preload-585806) Ensuring network default is active
	I0314 00:57:21.899677   65864 main.go:141] libmachine: (no-preload-585806) Ensuring network mk-no-preload-585806 is active
	I0314 00:57:21.900069   65864 main.go:141] libmachine: (no-preload-585806) Getting domain xml...
	I0314 00:57:21.900819   65864 main.go:141] libmachine: (no-preload-585806) Creating domain...
	I0314 00:57:23.105194   65864 main.go:141] libmachine: (no-preload-585806) Waiting to get IP...
	I0314 00:57:23.106090   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.106528   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.106637   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.106516   66729 retry.go:31] will retry after 255.90484ms: waiting for machine to come up
	I0314 00:57:23.364317   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.364804   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.364826   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.364757   66729 retry.go:31] will retry after 364.462281ms: waiting for machine to come up
	I0314 00:57:21.873289   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:57:21.873326   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:57:21.873694   65557 buildroot.go:166] provisioning hostname "embed-certs-164135"
	I0314 00:57:21.873720   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:57:21.873951   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:57:21.875591   65557 machine.go:97] duration metric: took 4m37.40921849s to provisionDockerMachine
	I0314 00:57:21.875631   65557 fix.go:56] duration metric: took 4m37.430459802s for fixHost
	I0314 00:57:21.875640   65557 start.go:83] releasing machines lock for "embed-certs-164135", held for 4m37.43047806s
	W0314 00:57:21.875666   65557 start.go:713] error starting host: provision: host is not running
	W0314 00:57:21.875751   65557 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0314 00:57:21.875760   65557 start.go:728] Will try again in 5 seconds ...
	I0314 00:57:23.731388   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.731971   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.732021   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.731924   66729 retry.go:31] will retry after 426.10288ms: waiting for machine to come up
	I0314 00:57:24.159436   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:24.159930   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:24.159966   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:24.159889   66729 retry.go:31] will retry after 490.499532ms: waiting for machine to come up
	I0314 00:57:24.651751   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:24.652239   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:24.652273   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:24.652218   66729 retry.go:31] will retry after 719.835184ms: waiting for machine to come up
	I0314 00:57:25.374185   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:25.374702   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:25.374728   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:25.374660   66729 retry.go:31] will retry after 944.773779ms: waiting for machine to come up
	I0314 00:57:26.320707   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:26.321049   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:26.321080   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:26.320994   66729 retry.go:31] will retry after 1.088133876s: waiting for machine to come up
	I0314 00:57:27.410642   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:27.411035   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:27.411066   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:27.410989   66729 retry.go:31] will retry after 1.379863279s: waiting for machine to come up
	I0314 00:57:26.877563   65557 start.go:360] acquireMachinesLock for embed-certs-164135: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:57:28.792154   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:28.792533   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:28.792564   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:28.792473   66729 retry.go:31] will retry after 1.814530842s: waiting for machine to come up
	I0314 00:57:30.609244   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:30.609658   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:30.609693   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:30.609597   66729 retry.go:31] will retry after 1.625136332s: waiting for machine to come up
	I0314 00:57:32.236903   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:32.237390   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:32.237409   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:32.237352   66729 retry.go:31] will retry after 1.788940449s: waiting for machine to come up
	I0314 00:57:34.028330   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:34.028825   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:34.028863   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:34.028779   66729 retry.go:31] will retry after 3.427808205s: waiting for machine to come up
	I0314 00:57:37.458317   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:37.458803   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:37.458835   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:37.458738   66729 retry.go:31] will retry after 3.173848854s: waiting for machine to come up
	I0314 00:57:41.915825   66021 start.go:364] duration metric: took 3m51.688049305s to acquireMachinesLock for "default-k8s-diff-port-652215"
	I0314 00:57:41.915886   66021 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:57:41.915895   66021 fix.go:54] fixHost starting: 
	I0314 00:57:41.916343   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:57:41.916378   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:57:41.933352   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37953
	I0314 00:57:41.933827   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:57:41.934418   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:57:41.934441   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:57:41.934820   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:57:41.934993   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:57:41.935162   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:57:41.936554   66021 fix.go:112] recreateIfNeeded on default-k8s-diff-port-652215: state=Stopped err=<nil>
	I0314 00:57:41.936586   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	W0314 00:57:41.936734   66021 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:57:41.939097   66021 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-652215" ...
	I0314 00:57:40.636094   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.636607   65864 main.go:141] libmachine: (no-preload-585806) Found IP for machine: 192.168.39.115
	I0314 00:57:40.636638   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has current primary IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.636645   65864 main.go:141] libmachine: (no-preload-585806) Reserving static IP address...
	I0314 00:57:40.637156   65864 main.go:141] libmachine: (no-preload-585806) Reserved static IP address: 192.168.39.115
	I0314 00:57:40.637189   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "no-preload-585806", mac: "52:54:00:2a:3a:3b", ip: "192.168.39.115"} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.637199   65864 main.go:141] libmachine: (no-preload-585806) Waiting for SSH to be available...
	I0314 00:57:40.637238   65864 main.go:141] libmachine: (no-preload-585806) DBG | skip adding static IP to network mk-no-preload-585806 - found existing host DHCP lease matching {name: "no-preload-585806", mac: "52:54:00:2a:3a:3b", ip: "192.168.39.115"}
	I0314 00:57:40.637254   65864 main.go:141] libmachine: (no-preload-585806) DBG | Getting to WaitForSSH function...
	I0314 00:57:40.639772   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.640240   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.640272   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.640445   65864 main.go:141] libmachine: (no-preload-585806) DBG | Using SSH client type: external
	I0314 00:57:40.640474   65864 main.go:141] libmachine: (no-preload-585806) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa (-rw-------)
	I0314 00:57:40.640508   65864 main.go:141] libmachine: (no-preload-585806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:57:40.640524   65864 main.go:141] libmachine: (no-preload-585806) DBG | About to run SSH command:
	I0314 00:57:40.640533   65864 main.go:141] libmachine: (no-preload-585806) DBG | exit 0
	I0314 00:57:40.770988   65864 main.go:141] libmachine: (no-preload-585806) DBG | SSH cmd err, output: <nil>: 
	I0314 00:57:40.771390   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetConfigRaw
	I0314 00:57:40.772025   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:40.774781   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.775128   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.775161   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.775407   65864 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/config.json ...
	I0314 00:57:40.775636   65864 machine.go:94] provisionDockerMachine start ...
	I0314 00:57:40.775658   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:40.775856   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:40.778051   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.778420   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.778447   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.778517   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:40.778728   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.778917   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.779101   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:40.779283   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:40.779521   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:40.779535   65864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:57:40.891616   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:57:40.891661   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:40.891913   65864 buildroot.go:166] provisioning hostname "no-preload-585806"
	I0314 00:57:40.891947   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:40.892139   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:40.895038   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.895441   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.895473   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.895593   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:40.895778   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.895899   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.896044   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:40.896206   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:40.896418   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:40.896438   65864 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-585806 && echo "no-preload-585806" | sudo tee /etc/hostname
	I0314 00:57:41.027921   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-585806
	
	I0314 00:57:41.027946   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.030406   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.030826   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.030856   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.031091   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.031314   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.031458   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.031656   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.031820   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.032043   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.032064   65864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-585806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-585806/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-585806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:57:41.152387   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:57:41.152420   65864 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:57:41.152443   65864 buildroot.go:174] setting up certificates
	I0314 00:57:41.152451   65864 provision.go:84] configureAuth start
	I0314 00:57:41.152459   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:41.152713   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:41.155431   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.155790   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.155816   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.155963   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.158272   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.158691   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.158720   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.158912   65864 provision.go:143] copyHostCerts
	I0314 00:57:41.158991   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:57:41.159005   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:57:41.159094   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:57:41.159204   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:57:41.159213   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:57:41.159242   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:57:41.159299   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:57:41.159306   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:57:41.159326   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:57:41.159380   65864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.no-preload-585806 san=[127.0.0.1 192.168.39.115 localhost minikube no-preload-585806]
	I0314 00:57:41.204543   65864 provision.go:177] copyRemoteCerts
	I0314 00:57:41.204599   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:57:41.204624   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.207169   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.207479   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.207505   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.207717   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.207870   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.208042   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.208200   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.294111   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:57:41.319125   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 00:57:41.344061   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:57:41.369393   65864 provision.go:87] duration metric: took 216.929827ms to configureAuth
	I0314 00:57:41.369428   65864 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:57:41.369621   65864 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:57:41.369690   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.372440   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.372782   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.372809   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.373062   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.373298   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.373543   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.373716   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.373895   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.374097   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.374122   65864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:57:41.665162   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:57:41.665200   65864 machine.go:97] duration metric: took 889.549183ms to provisionDockerMachine
	I0314 00:57:41.665214   65864 start.go:293] postStartSetup for "no-preload-585806" (driver="kvm2")
	I0314 00:57:41.665227   65864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:57:41.665243   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.665626   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:57:41.665662   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.668351   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.668798   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.668827   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.669012   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.669412   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.669635   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.669794   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.758910   65864 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:57:41.763539   65864 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:57:41.763571   65864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:57:41.763645   65864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:57:41.763719   65864 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:57:41.763809   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:57:41.774372   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:57:41.799961   65864 start.go:296] duration metric: took 134.732457ms for postStartSetup
	I0314 00:57:41.800006   65864 fix.go:56] duration metric: took 19.924222364s for fixHost
	I0314 00:57:41.800030   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.802714   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.803178   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.803201   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.803357   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.803557   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.803730   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.803888   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.804064   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.804220   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.804231   65864 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:57:41.915615   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377861.868053197
	
	I0314 00:57:41.915646   65864 fix.go:216] guest clock: 1710377861.868053197
	I0314 00:57:41.915654   65864 fix.go:229] Guest: 2024-03-14 00:57:41.868053197 +0000 UTC Remote: 2024-03-14 00:57:41.800010702 +0000 UTC m=+253.225618100 (delta=68.042495ms)
	I0314 00:57:41.915695   65864 fix.go:200] guest clock delta is within tolerance: 68.042495ms
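(Editor's note: the two fix.go lines above compare the guest clock, read over SSH as seconds.nanoseconds, against the host's wall clock and report the difference. A minimal sketch, using only the timestamps captured in this log and not minikube's own tolerance value, which is not shown here:)

// clock_delta.go - hedged sketch recomputing the reported 68.042495ms delta.
package main

import (
	"fmt"
	"time"
)

func main() {
	// "guest clock: 1710377861.868053197" from the log, split into sec/nsec.
	guest := time.Unix(1710377861, 868053197)
	// "Remote: 2024-03-14 00:57:41.800010702 +0000 UTC" from the same log line.
	remote := time.Date(2024, 3, 14, 0, 57, 41, 800010702, time.UTC)

	delta := guest.Sub(remote)
	// Prints "delta: 68.042495ms", matching the value logged above; minikube
	// then checks this against its own tolerance before continuing.
	fmt.Println("delta:", delta)
}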
	I0314 00:57:41.915704   65864 start.go:83] releasing machines lock for "no-preload-585806", held for 20.039948178s
	I0314 00:57:41.915733   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.916097   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:41.918713   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.919145   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.919175   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.919352   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.919878   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.920065   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.920140   65864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:57:41.920200   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.920257   65864 ssh_runner.go:195] Run: cat /version.json
	I0314 00:57:41.920279   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.922799   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923104   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923176   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.923200   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923333   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.923527   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.923572   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.923602   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923710   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.923788   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.923884   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.923950   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.924091   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.924265   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:42.004651   65864 ssh_runner.go:195] Run: systemctl --version
	I0314 00:57:42.045673   65864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:57:42.198196   65864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:57:42.204887   65864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:57:42.204968   65864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:57:42.223088   65864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
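The lines above show the loopback check followed by minikube moving any bridge/podman CNI configs aside with a ".mk_disabled" suffix so that CRI-O's own CNI config takes precedence. Below is a minimal Go sketch of that rename step; it is illustrative only (not the minikube source) and assumes it runs with enough privilege to rename files under /etc/cni/net.d.

// disable_bridge_cni.go - sketch of the rename step logged above: any
// bridge/podman CNI config in /etc/cni/net.d is moved aside with a
// ".mk_disabled" suffix. Illustrative reimplementation, not minikube code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			dst := src + ".mk_disabled"
			if err := os.Rename(src, dst); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Printf("disabled %s\n", dst)
		}
	}
}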
	I0314 00:57:42.223116   65864 start.go:494] detecting cgroup driver to use...
	I0314 00:57:42.223181   65864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:57:42.240213   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:57:42.260222   65864 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:57:42.260282   65864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:57:42.279489   65864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:57:42.297898   65864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:57:42.436010   65864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:57:42.591582   65864 docker.go:233] disabling docker service ...
	I0314 00:57:42.591653   65864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:57:42.609192   65864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:57:42.629505   65864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:57:42.788667   65864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:57:42.920745   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:57:42.947679   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:57:42.970420   65864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:57:42.970496   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:42.984792   65864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:57:42.984851   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:42.998350   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:43.011001   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
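The three sed invocations above pin the pause image, force the "cgroupfs" cgroup manager, and re-add "conmon_cgroup = pod" in the CRI-O drop-in. The Go sketch below performs the same edits with regexp; it is a sketch under the assumption that /etc/crio/crio.conf.d/02-crio.conf already contains pause_image and cgroup_manager lines, and is not the minikube implementation.

// Sketch of the crio.conf.d edits logged above: pin the pause image, force
// the cgroupfs cgroup manager, and make conmon share the pod cgroup.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conf := string(data)

	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}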
	I0314 00:57:43.023341   65864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:57:43.036165   65864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:57:43.047342   65864 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:57:43.047401   65864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:57:43.063390   65864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
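The failed sysctl above is the expected path on a guest that has not yet loaded br_netfilter: the key is missing, so minikube falls back to modprobe and then enables IPv4 forwarding before restarting CRI-O. A minimal Go sketch of that fallback follows; it assumes root on a Linux host and is illustrative only.

// Sketch of the fallback logged above: probe the bridge-netfilter sysctl,
// load br_netfilter if the key is missing, then enable IPv4 forwarding.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Key not present yet: load the module that creates it.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
			os.Exit(1)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
		os.Exit(1)
	}
}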
	I0314 00:57:43.075512   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:57:43.214939   65864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:57:43.370092   65864 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:57:43.370154   65864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:57:43.375110   65864 start.go:562] Will wait 60s for crictl version
	I0314 00:57:43.375156   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.379051   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:57:43.421498   65864 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:57:43.421587   65864 ssh_runner.go:195] Run: crio --version
	I0314 00:57:43.451281   65864 ssh_runner.go:195] Run: crio --version
	I0314 00:57:43.486171   65864 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 00:57:43.487776   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:43.490910   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:43.491299   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:43.491328   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:43.491513   65864 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 00:57:43.495972   65864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
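The grep plus bash one-liner above makes sure /etc/hosts in the guest resolves host.minikube.internal to the libvirt gateway (192.168.39.1 in this run), replacing any stale entry. Below is a small Go sketch of the same idempotent rewrite; the gateway IP is taken from the log and the rest is illustrative, not minikube code.

// Sketch of the /etc/hosts rewrite logged above: drop any existing
// host.minikube.internal entry and append a fresh one for the gateway.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const (
		hostsPath = "/etc/hosts"
		entry     = "192.168.39.1\thost.minikube.internal"
	)
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile(hostsPath, []byte(out), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}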
	I0314 00:57:43.510066   65864 kubeadm.go:877] updating cluster {Name:no-preload-585806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:57:43.510197   65864 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 00:57:43.510235   65864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:57:43.550172   65864 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 00:57:43.550198   65864 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:57:43.550251   65864 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:43.550290   65864 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.550308   65864 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.550348   65864 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.550373   65864 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.550409   65864 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.550329   65864 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0314 00:57:43.550287   65864 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.551857   65864 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.551883   65864 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.551922   65864 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.551926   65864 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.551915   65864 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.551860   65864 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:43.552047   65864 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0314 00:57:43.552087   65864 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:41.940702   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Start
	I0314 00:57:41.940872   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring networks are active...
	I0314 00:57:41.941571   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring network default is active
	I0314 00:57:41.941942   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring network mk-default-k8s-diff-port-652215 is active
	I0314 00:57:41.942369   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Getting domain xml...
	I0314 00:57:41.943060   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Creating domain...
	I0314 00:57:43.253573   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting to get IP...
	I0314 00:57:43.254399   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.254819   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.254871   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.254798   66848 retry.go:31] will retry after 250.726741ms: waiting for machine to come up
	I0314 00:57:43.507438   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.507947   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.507974   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.507889   66848 retry.go:31] will retry after 261.304364ms: waiting for machine to come up
	I0314 00:57:43.770392   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.770932   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.770992   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.770922   66848 retry.go:31] will retry after 399.951584ms: waiting for machine to come up
	I0314 00:57:44.172796   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.173301   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.173330   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:44.173250   66848 retry.go:31] will retry after 446.71472ms: waiting for machine to come up
	I0314 00:57:44.621959   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.622493   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.622524   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:44.622435   66848 retry.go:31] will retry after 594.760117ms: waiting for machine to come up
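The retry.go lines above poll the libvirt network's DHCP leases for the restarted guest's IP, waiting a steadily growing, jittered delay between attempts. The following is a minimal sketch of that retry pattern; lookupIP is a hypothetical stand-in for "read the DHCP lease for this MAC", not a real minikube helper.

// Minimal retry-with-growing-delay sketch mirroring the retry.go lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	// Stand-in: a real implementation would query the libvirt network's
	// DHCP leases for the machine's MAC address.
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay each attempt
	}
}

func main() {
	if ip, err := waitForIP(2 * time.Minute); err == nil {
		fmt.Println("Found IP for machine:", ip)
	} else {
		fmt.Println(err)
	}
}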
	I0314 00:57:43.767614   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.767919   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.781946   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.792745   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.820426   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.821936   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.874149   65864 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0314 00:57:43.874193   65864 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.874207   65864 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0314 00:57:43.874239   65864 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.874263   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.874281   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.909916   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0314 00:57:43.929648   65864 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0314 00:57:43.929701   65864 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.929756   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.929769   65864 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0314 00:57:43.929810   65864 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.929866   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958025   65864 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0314 00:57:43.958074   65864 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.958108   65864 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0314 00:57:43.958151   65864 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.958171   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.958188   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958124   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958192   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:44.099675   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:44.099750   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:44.099805   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:44.099859   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.099898   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:44.099943   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.099999   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0314 00:57:44.100067   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:44.185667   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:44.185697   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0314 00:57:44.185784   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0314 00:57:44.185822   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0314 00:57:44.185833   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.185860   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.185874   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:44.185787   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:44.185787   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:44.191806   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:44.191853   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0314 00:57:44.191922   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:44.205188   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0314 00:57:44.428096   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:47.084005   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.898127832s)
	I0314 00:57:47.084049   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0314 00:57:47.084073   65864 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:47.084084   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.898188272s)
	I0314 00:57:47.084114   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:47.084123   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0314 00:57:47.084163   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.898224944s)
	I0314 00:57:47.084176   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0314 00:57:47.084213   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.892265677s)
	I0314 00:57:47.084231   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0314 00:57:47.084261   65864 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.656144328s)
	I0314 00:57:47.084290   65864 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0314 00:57:47.084313   65864 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:47.084344   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:45.219284   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:45.219835   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:45.219865   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:45.219763   66848 retry.go:31] will retry after 838.074484ms: waiting for machine to come up
	I0314 00:57:46.059759   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:46.060182   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:46.060212   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:46.060124   66848 retry.go:31] will retry after 1.038046627s: waiting for machine to come up
	I0314 00:57:47.100208   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:47.100623   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:47.100651   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:47.100574   66848 retry.go:31] will retry after 1.029629423s: waiting for machine to come up
	I0314 00:57:48.131899   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:48.132332   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:48.132360   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:48.132293   66848 retry.go:31] will retry after 1.38894741s: waiting for machine to come up
	I0314 00:57:49.522727   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:49.523219   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:49.523250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:49.523177   66848 retry.go:31] will retry after 1.498715394s: waiting for machine to come up
	I0314 00:57:51.187413   65864 ssh_runner.go:235] Completed: which crictl: (4.103045994s)
	I0314 00:57:51.187456   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.103319804s)
	I0314 00:57:51.187508   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0314 00:57:51.187527   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:51.187571   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:51.187669   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:51.236123   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 00:57:51.236241   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:53.072155   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.88445651s)
	I0314 00:57:53.072191   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0314 00:57:53.072203   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.835936702s)
	I0314 00:57:53.072239   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0314 00:57:53.072216   65864 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:53.072298   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:51.024135   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:51.024551   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:51.024591   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:51.024485   66848 retry.go:31] will retry after 1.906242033s: waiting for machine to come up
	I0314 00:57:52.931992   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:52.932501   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:52.932532   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:52.932435   66848 retry.go:31] will retry after 2.502905013s: waiting for machine to come up
	I0314 00:57:55.041813   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969486159s)
	I0314 00:57:55.041846   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0314 00:57:55.041873   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:55.041921   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:56.401046   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.359096555s)
	I0314 00:57:56.401083   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0314 00:57:56.401125   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:56.401206   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
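Because this is the no-preload profile, no preload tarball exists for v1.29.0-rc.2, so each required image is checked in the runtime by its pinned image ID and, when missing ("needs transfer"), copied from the local cache directory and loaded with podman, as the interleaved lines above show. The sketch below illustrates that per-image decision using only the commands visible in the log; the expected ID and paths in main are placeholders, not values from this run.

// Sketch of the per-image decision logged above: inspect the image in the
// container runtime, and if its ID is not the expected one, remove it and
// load the cached tarball with podman. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func ensureImage(name, wantID, cachedTar string) error {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", name).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // already present with the expected ID
	}
	// "needs transfer": drop whatever is there and load from the local cache.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", name).Run()
	if err := exec.Command("sudo", "podman", "load", "-i", cachedTar).Run(); err != nil {
		return fmt.Errorf("loading %s from %s: %w", name, cachedTar, err)
	}
	return nil
}

func main() {
	// Placeholder values, not taken from this run.
	err := ensureImage(
		"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
		"<expected-image-id>",
		"/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2",
	)
	if err != nil {
		fmt.Println(err)
	}
}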
	I0314 00:57:55.438250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:55.438696   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:55.438728   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:55.438645   66848 retry.go:31] will retry after 4.267197677s: waiting for machine to come up
	I0314 00:57:59.709345   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.709884   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Found IP for machine: 192.168.61.7
	I0314 00:57:59.709901   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Reserving static IP address...
	I0314 00:57:59.709912   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has current primary IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.710329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-652215", mac: "52:54:00:58:e5:b0", ip: "192.168.61.7"} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.710365   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | skip adding static IP to network mk-default-k8s-diff-port-652215 - found existing host DHCP lease matching {name: "default-k8s-diff-port-652215", mac: "52:54:00:58:e5:b0", ip: "192.168.61.7"}
	I0314 00:57:59.710387   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Reserved static IP address: 192.168.61.7
	I0314 00:57:59.710404   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for SSH to be available...
	I0314 00:57:59.710420   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Getting to WaitForSSH function...
	I0314 00:57:59.712445   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.712764   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.712794   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.712867   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Using SSH client type: external
	I0314 00:57:59.712903   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa (-rw-------)
	I0314 00:57:59.712926   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:57:59.712940   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | About to run SSH command:
	I0314 00:57:59.712946   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | exit 0
	I0314 00:57:59.831120   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | SSH cmd err, output: <nil>: 
	I0314 00:57:59.831427   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetConfigRaw
	I0314 00:57:59.832230   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:57:59.834631   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.835052   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.835085   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.835264   66021 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/config.json ...
	I0314 00:57:59.835458   66021 machine.go:94] provisionDockerMachine start ...
	I0314 00:57:59.835478   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:57:59.835700   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:57:59.838267   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.838654   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.838681   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.838814   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:57:59.838985   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.839158   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.839318   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:57:59.839533   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:59.839750   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:57:59.839764   66021 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:57:59.943463   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:57:59.943488   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:57:59.943743   66021 buildroot.go:166] provisioning hostname "default-k8s-diff-port-652215"
	I0314 00:57:59.943765   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:57:59.943905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:57:59.946244   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.946561   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.946592   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.946858   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:57:59.947069   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.947218   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.947329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:57:59.947522   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:59.947682   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:57:59.947695   66021 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-652215 && echo "default-k8s-diff-port-652215" | sudo tee /etc/hostname
	I0314 00:58:00.063433   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-652215
	
	I0314 00:58:00.063467   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.066382   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.066832   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.066872   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.067051   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.067272   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.067505   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.067706   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.067914   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:00.068139   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:00.068167   66021 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-652215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-652215/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-652215' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:01.167666   66232 start.go:364] duration metric: took 3m57.948538504s to acquireMachinesLock for "old-k8s-version-004791"
	I0314 00:58:01.167732   66232 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:58:01.167743   66232 fix.go:54] fixHost starting: 
	I0314 00:58:01.168159   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:01.168192   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:01.184977   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39193
	I0314 00:58:01.185352   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:01.185781   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:58:01.185799   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:01.186133   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:01.186318   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:01.186463   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetState
	I0314 00:58:01.187778   66232 fix.go:112] recreateIfNeeded on old-k8s-version-004791: state=Stopped err=<nil>
	I0314 00:58:01.187814   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	W0314 00:58:01.187966   66232 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:58:01.190508   66232 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-004791" ...
	I0314 00:58:00.185178   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:00.185209   66021 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:00.185258   66021 buildroot.go:174] setting up certificates
	I0314 00:58:00.185270   66021 provision.go:84] configureAuth start
	I0314 00:58:00.185286   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:58:00.185558   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:00.188566   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.188946   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.188977   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.189147   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.191605   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.191954   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.191981   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.192111   66021 provision.go:143] copyHostCerts
	I0314 00:58:00.192179   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:00.192193   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:00.192295   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:00.192409   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:00.192420   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:00.192449   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:00.192531   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:00.192541   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:00.192571   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:00.192650   66021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-652215 san=[127.0.0.1 192.168.61.7 default-k8s-diff-port-652215 localhost minikube]
	I0314 00:58:00.441714   66021 provision.go:177] copyRemoteCerts
	I0314 00:58:00.441760   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:00.441783   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.444329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.444711   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.444740   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.444905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.445096   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.445257   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.445369   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:00.529677   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:00.560670   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0314 00:58:00.589572   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:58:00.620349   66021 provision.go:87] duration metric: took 435.063551ms to configureAuth
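configureAuth above copies the host certificates into place and regenerates the machine's server certificate, signed by the minikube CA, with the SANs listed in the log (127.0.0.1, the VM IP 192.168.61.7, the machine name, localhost, minikube). The sketch below shows how such a CA-signed server certificate with those SANs can be produced with crypto/x509; for brevity it creates a throwaway CA in memory, whereas the real flow signs with the existing ca.pem/ca-key.pem from the .minikube directory, and error handling is elided.

// Sketch of generating a CA-signed server cert with SANs like those above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-652215"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-652215", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.7")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}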
	I0314 00:58:00.620380   66021 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:00.620576   66021 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:00.620670   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.623250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.623633   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.623663   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.623825   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.624017   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.624205   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.624346   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.624474   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:00.624650   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:00.624664   66021 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:00.940388   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:00.940416   66021 machine.go:97] duration metric: took 1.104945308s to provisionDockerMachine
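The "setting minikube options for container-runtime" step a few lines up writes an environment drop-in under /etc/sysconfig so CRI-O treats the service CIDR (10.96.0.0/12) as an insecure registry range, then restarts crio. A minimal Go sketch of writing that drop-in and restarting the service follows; it assumes root and is illustrative only.

// Sketch of the sysconfig drop-in written above. Illustrative only.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const content = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := exec.Command("systemctl", "restart", "crio").Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}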
	I0314 00:58:00.940430   66021 start.go:293] postStartSetup for "default-k8s-diff-port-652215" (driver="kvm2")
	I0314 00:58:00.940443   66021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:00.940513   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:00.940829   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:00.940861   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.943461   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.943854   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.943881   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.944035   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.944233   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.944392   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.944514   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.028775   66021 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:01.034219   66021 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:01.034246   66021 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:01.034319   66021 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:01.034417   66021 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:01.034534   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:01.043871   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:01.068236   66021 start.go:296] duration metric: took 127.791208ms for postStartSetup
	I0314 00:58:01.068281   66021 fix.go:56] duration metric: took 19.152386474s for fixHost
	I0314 00:58:01.068320   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.071153   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.071474   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.071519   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.071664   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.071873   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.072037   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.072184   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.072339   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:01.072546   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:01.072560   66021 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 00:58:01.167500   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377881.146926820
	
	I0314 00:58:01.167531   66021 fix.go:216] guest clock: 1710377881.146926820
	I0314 00:58:01.167543   66021 fix.go:229] Guest: 2024-03-14 00:58:01.14692682 +0000 UTC Remote: 2024-03-14 00:58:01.068285678 +0000 UTC m=+250.989822406 (delta=78.641142ms)
	I0314 00:58:01.167569   66021 fix.go:200] guest clock delta is within tolerance: 78.641142ms
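	(Editor's note: the clock check above runs date +%s.%N on the guest and compares it with the host's wall clock, only resyncing when the delta exceeds a tolerance; here the ~79ms delta is accepted. A rough host-side equivalent, assuming the profile name from this run, purely illustrative:

	  host=$(date +%s.%N)
	  guest=$(minikube -p default-k8s-diff-port-652215 ssh -- date +%s.%N)
	  # delta in seconds; in this run it was ~0.079s, well inside tolerance
	  awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %.6fs\n", g - h }'
	)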
	I0314 00:58:01.167576   66021 start.go:83] releasing machines lock for "default-k8s-diff-port-652215", held for 19.251715411s
	I0314 00:58:01.167603   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.167900   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:01.170608   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.171001   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.171041   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.171190   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171674   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171856   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171937   66021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:01.171985   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.172100   66021 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:01.172128   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.174787   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.174963   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175180   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.175209   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175343   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.175398   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175477   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.175553   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.175677   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.175741   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.175803   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.175880   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.175939   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.176003   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.251768   66021 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:01.289374   66021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:01.438966   66021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:01.445524   66021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:01.445595   66021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:01.463672   66021 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:01.463699   66021 start.go:494] detecting cgroup driver to use...
	I0314 00:58:01.463778   66021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:01.485254   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:01.503492   66021 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:01.503552   66021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:01.522423   66021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:01.537421   66021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:01.664303   66021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:01.819916   66021 docker.go:233] disabling docker service ...
	I0314 00:58:01.819980   66021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:01.838697   66021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:01.853242   66021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:02.003570   66021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:02.146836   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:02.162421   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:02.191202   66021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:58:02.191272   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.206856   66021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:02.206923   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.219794   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.233272   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.245213   66021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
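	(Editor's note: taken together, the sed edits above leave the CRI-O drop-in roughly like the sketch below. Only the three touched keys are shown; the TOML table headers are assumed from CRI-O's stock 02-crio.conf layout, not from this log:

	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.9"

	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	)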
	I0314 00:58:02.259118   66021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:02.273991   66021 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:02.274056   66021 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:02.289319   66021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:02.300063   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:02.416447   66021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:58:02.566738   66021 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:02.566859   66021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:02.572193   66021 start.go:562] Will wait 60s for crictl version
	I0314 00:58:02.572234   66021 ssh_runner.go:195] Run: which crictl
	I0314 00:58:02.576144   66021 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:02.615025   66021 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:02.615124   66021 ssh_runner.go:195] Run: crio --version
	I0314 00:58:02.643201   66021 ssh_runner.go:195] Run: crio --version
	I0314 00:58:02.673207   66021 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 00:58:01.192096   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .Start
	I0314 00:58:01.192279   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring networks are active...
	I0314 00:58:01.192923   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network default is active
	I0314 00:58:01.193276   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network mk-old-k8s-version-004791 is active
	I0314 00:58:01.193771   66232 main.go:141] libmachine: (old-k8s-version-004791) Getting domain xml...
	I0314 00:58:01.194453   66232 main.go:141] libmachine: (old-k8s-version-004791) Creating domain...
	I0314 00:58:02.495098   66232 main.go:141] libmachine: (old-k8s-version-004791) Waiting to get IP...
	I0314 00:58:02.496096   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:02.496509   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:02.496599   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:02.496504   66971 retry.go:31] will retry after 226.458873ms: waiting for machine to come up
	I0314 00:58:02.724812   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:02.725355   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:02.725383   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:02.725305   66971 retry.go:31] will retry after 274.59062ms: waiting for machine to come up
	I0314 00:58:03.001727   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.002335   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.002486   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.002429   66971 retry.go:31] will retry after 362.865307ms: waiting for machine to come up
	I0314 00:57:58.881850   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.480612113s)
	I0314 00:57:58.881884   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0314 00:57:58.881919   65864 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:58.881990   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:59.732349   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 00:57:59.732390   65864 cache_images.go:123] Successfully loaded all cached images
	I0314 00:57:59.732395   65864 cache_images.go:92] duration metric: took 16.182181374s to LoadCachedImages
	I0314 00:57:59.732406   65864 kubeadm.go:928] updating node { 192.168.39.115 8443 v1.29.0-rc.2 crio true true} ...
	I0314 00:57:59.732566   65864 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-585806 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
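	(Editor's note: the kubelet fragment above is installed as a systemd drop-in, scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. To confirm what the kubelet will actually be started with on the guest, something like the following would do; the commands are illustrative, with the profile name taken from this run:

	  minikube -p no-preload-585806 ssh -- systemctl cat kubelet
	  minikube -p no-preload-585806 ssh -- cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	)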
	I0314 00:57:59.732632   65864 ssh_runner.go:195] Run: crio config
	I0314 00:57:59.780946   65864 cni.go:84] Creating CNI manager for ""
	I0314 00:57:59.780969   65864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:57:59.780980   65864 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:57:59.780999   65864 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-585806 NodeName:no-preload-585806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:57:59.781184   65864 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-585806"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:57:59.781255   65864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 00:57:59.791989   65864 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:57:59.792059   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:57:59.801720   65864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0314 00:57:59.819248   65864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 00:57:59.837405   65864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
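	(Editor's note: the rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new above. Recent kubeadm releases can sanity-check such a file offline with "kubeadm config validate"; this run does not call it, so the snippet below is only a suggestion, using the binary path from this log:

	  sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	)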
	I0314 00:57:59.855909   65864 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0314 00:57:59.861139   65864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
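	(Editor's note: the /etc/hosts one-liner above rebuilds the file in /tmp and copies it into place with sudo rather than redirecting directly, because a redirect after sudo runs as the unprivileged shell and would fail on the root-owned file. The same pattern in isolation, with the IP and hostname from this run:

	  { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	    echo "192.168.39.115	control-plane.minikube.internal"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
	)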
	I0314 00:57:59.877573   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:00.004672   65864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:00.025676   65864 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806 for IP: 192.168.39.115
	I0314 00:58:00.025696   65864 certs.go:194] generating shared ca certs ...
	I0314 00:58:00.025711   65864 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:00.025861   65864 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:00.025912   65864 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:00.025925   65864 certs.go:256] generating profile certs ...
	I0314 00:58:00.026023   65864 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/client.key
	I0314 00:58:00.026093   65864 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.key.e22b08b3
	I0314 00:58:00.026150   65864 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.key
	I0314 00:58:00.026304   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:00.026342   65864 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:00.026355   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:00.026393   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:00.026424   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:00.026461   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:00.026510   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:00.027206   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:00.087876   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:00.130974   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:00.159419   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:00.202659   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 00:58:00.248014   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 00:58:00.273362   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:00.297326   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:58:00.321565   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:00.346012   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:00.370094   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:00.393592   65864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:00.411060   65864 ssh_runner.go:195] Run: openssl version
	I0314 00:58:00.417031   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:00.428430   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.433251   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.433303   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.439142   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:00.451840   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:00.466706   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.472024   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.472101   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.479004   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:00.490877   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:00.503120   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.507926   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.507973   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.513957   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
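	(Editor's note: the symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash file names; TLS libraries look up CAs in /etc/ssl/certs by that hash. To see where a given hash comes from, using a path from this run:

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash, b5213941 here
	  ls -l /etc/ssl/certs/b5213941.0                                           # symlink pointing back at minikubeCA.pem
	)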
	I0314 00:58:00.526055   65864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:00.531442   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:00.538049   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:00.544709   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:00.551218   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:00.557610   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:00.564187   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
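	(Editor's note: the string of -checkend 86400 probes above asks OpenSSL whether each certificate will still be valid 24 hours from now; exit status 0 means it will not expire within that window, non-zero means it will. Minimal form, using one of the cert paths from this log:

	  openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	    && echo "still valid 24h from now" \
	    || echo "expires within 24h (or is already expired)"
	)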
	I0314 00:58:00.571582   65864 kubeadm.go:391] StartCluster: {Name:no-preload-585806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:00.571725   65864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:00.571793   65864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:00.625273   65864 cri.go:89] found id: ""
	I0314 00:58:00.625330   65864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:00.636554   65864 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:00.636582   65864 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:00.636588   65864 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:00.636630   65864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:00.648360   65864 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:00.649289   65864 kubeconfig.go:125] found "no-preload-585806" server: "https://192.168.39.115:8443"
	I0314 00:58:00.652107   65864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:00.664337   65864 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.115
	I0314 00:58:00.664378   65864 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:00.664390   65864 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:00.664436   65864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:00.702043   65864 cri.go:89] found id: ""
	I0314 00:58:00.702119   65864 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:00.721052   65864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:00.732931   65864 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:00.732961   65864 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:00.733015   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:00.743282   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:00.743363   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:00.753893   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:00.764545   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:00.764603   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:00.779121   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:00.795628   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:00.795690   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:00.807835   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:00.820920   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:00.821000   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:00.834341   65864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:00.844677   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:00.971502   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:01.810329   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:02.063422   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:02.144025   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
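	(Editor's note: instead of a full "kubeadm init", the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the already-rendered config, which regenerates the static-pod manifests without re-bootstrapping the cluster. Illustrative follow-ups on the guest, using paths from this log:

	  sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm init phase --help   # lists the individual phases
	  ls /etc/kubernetes/manifests    # static-pod manifests the control-plane and etcd phases (re)generate
	)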
	I0314 00:58:02.284020   65864 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:02.284117   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:02.784938   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:03.285046   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:03.349582   65864 api_server.go:72] duration metric: took 1.065560764s to wait for apiserver process to appear ...
	I0314 00:58:03.349613   65864 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:03.349634   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:03.350222   65864 api_server.go:269] stopped: https://192.168.39.115:8443/healthz: Get "https://192.168.39.115:8443/healthz": dial tcp 192.168.39.115:8443: connect: connection refused
	I0314 00:58:02.674905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:02.677914   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:02.678319   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:02.678358   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:02.678506   66021 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:02.682714   66021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:02.696263   66021 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-652215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:02.696407   66021 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:58:02.696474   66021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:02.736997   66021 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 00:58:02.737060   66021 ssh_runner.go:195] Run: which lz4
	I0314 00:58:02.741014   66021 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 00:58:02.745225   66021 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:02.745255   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 00:58:04.577503   66021 crio.go:444] duration metric: took 1.836515386s to copy over tarball
	I0314 00:58:04.577580   66021 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:03.367211   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.367946   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.367985   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.367818   66971 retry.go:31] will retry after 545.955079ms: waiting for machine to come up
	I0314 00:58:03.915415   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.915920   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.915946   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.915836   66971 retry.go:31] will retry after 509.217519ms: waiting for machine to come up
	I0314 00:58:04.426378   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:04.426707   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:04.426730   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:04.426682   66971 retry.go:31] will retry after 834.85927ms: waiting for machine to come up
	I0314 00:58:05.263751   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:05.264214   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:05.264244   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:05.264155   66971 retry.go:31] will retry after 986.483361ms: waiting for machine to come up
	I0314 00:58:06.251927   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:06.252550   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:06.252573   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:06.252475   66971 retry.go:31] will retry after 1.151541473s: waiting for machine to come up
	I0314 00:58:07.405797   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:07.406395   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:07.406425   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:07.406349   66971 retry.go:31] will retry after 1.406754601s: waiting for machine to come up
	I0314 00:58:03.850705   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.738726   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:06.738753   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:06.738788   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.754844   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:06.754883   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:06.850175   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.859445   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:06.859483   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
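	(Editor's note: the [-] entries in the verbose output above mark post-start hooks that have not completed yet, here the RBAC and priority-class bootstrap; the check flips to 200 once they finish. Once a kubeconfig exists, the same verbose breakdown can be pulled with kubectl, assuming the usual minikube convention of naming the context after the profile:

	  kubectl --context no-preload-585806 get --raw '/healthz?verbose'
	  kubectl --context no-preload-585806 get --raw '/readyz?verbose'
	)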
	I0314 00:58:07.350592   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:07.367299   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:07.367337   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.850476   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:08.566122   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:08.566165   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:08.566182   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:08.571741   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:08.571777   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
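The [+]/[-] blocks above are the kube-apiserver's verbose /healthz responses: each poststarthook is listed as [+] (passed) or [-] (failed: reason withheld), and the endpoint keeps returning 500 until every check passes. Below is a minimal Go sketch of an equivalent polling loop to the "Checking apiserver healthz" step in api_server.go; the address and port are taken from this log, and skipping TLS verification is purely for illustration, not minikube's actual implementation.

	// Hedged sketch: poll the apiserver's verbose health endpoint until it
	// returns 200. Address/port come from the log above; InsecureSkipVerify
	// is an assumption for illustration only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for {
			resp, err := client.Get("https://192.168.39.115:8443/healthz?verbose")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("status %d\n%s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // every [+]/[-] check now reports ok
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}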
	I0314 00:58:07.355046   66021 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.77743394s)
	I0314 00:58:07.355081   66021 crio.go:451] duration metric: took 2.77754644s to extract the tarball
	I0314 00:58:07.355093   66021 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:07.401032   66021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:07.451493   66021 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:58:07.451515   66021 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:58:07.451523   66021 kubeadm.go:928] updating node { 192.168.61.7 8444 v1.28.4 crio true true} ...
	I0314 00:58:07.451679   66021 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-652215 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:07.451756   66021 ssh_runner.go:195] Run: crio config
	I0314 00:58:07.500159   66021 cni.go:84] Creating CNI manager for ""
	I0314 00:58:07.500182   66021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:07.500192   66021 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:07.500211   66021 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.7 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-652215 NodeName:default-k8s-diff-port-652215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:58:07.500349   66021 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-652215"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
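The kubeadm.yaml generated above is copied to /var/tmp/minikube/kubeadm.yaml and then driven through the individual "kubeadm init phase" commands that appear further down in this log (certs, kubeconfig, kubelet-start, control-plane, etcd). The Go sketch below mirrors that sequence using only paths shown in the log; it is an illustration of the restart flow, not minikube's code.

	// Hedged sketch: run the same kubeadm init phases this log shows against
	// the generated config. The config path and binaries PATH prefix are taken
	// from the log; everything else is an assumption for illustration.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("kubeadm", args...)
			cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.28.4:"+os.Getenv("PATH"))
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}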
	I0314 00:58:07.500398   66021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:58:07.515207   66021 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:07.515281   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:07.530918   66021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0314 00:58:07.558457   66021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:07.582126   66021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 00:58:07.678701   66021 ssh_runner.go:195] Run: grep 192.168.61.7	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:07.684200   66021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:07.701599   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:07.825784   66021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:07.848241   66021 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215 for IP: 192.168.61.7
	I0314 00:58:07.848265   66021 certs.go:194] generating shared ca certs ...
	I0314 00:58:07.848286   66021 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:07.848457   66021 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:07.848515   66021 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:07.848529   66021 certs.go:256] generating profile certs ...
	I0314 00:58:07.848644   66021 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/client.key
	I0314 00:58:07.935830   66021 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.key.b1ed833a
	I0314 00:58:07.935933   66021 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.key
	I0314 00:58:07.936092   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:07.936147   66021 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:07.936161   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:07.936191   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:07.936222   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:07.936255   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:07.936326   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:07.937040   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:07.981116   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:08.010341   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:08.036689   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:08.064909   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 00:58:08.092883   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 00:58:08.119465   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:08.146029   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 00:58:08.171735   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:08.198370   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:08.225423   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:08.253303   66021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:08.272262   66021 ssh_runner.go:195] Run: openssl version
	I0314 00:58:08.278047   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:08.289661   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.294307   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.294365   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.300267   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:08.311382   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:08.322886   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.328522   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.328588   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.335598   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:08.347048   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:08.358811   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.365065   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.365113   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.372929   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:08.384586   66021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:08.389382   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:08.395577   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:08.401901   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:08.409134   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:08.415666   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:08.422160   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 00:58:08.428553   66021 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-652215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:08.428681   66021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:08.428757   66021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:08.471162   66021 cri.go:89] found id: ""
	I0314 00:58:08.471246   66021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:08.482236   66021 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:08.482258   66021 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:08.482266   66021 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:08.482318   66021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:08.492599   66021 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:08.493612   66021 kubeconfig.go:125] found "default-k8s-diff-port-652215" server: "https://192.168.61.7:8444"
	I0314 00:58:08.495896   66021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:08.509437   66021 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.7
	I0314 00:58:08.509469   66021 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:08.509498   66021 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:08.509552   66021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:08.549257   66021 cri.go:89] found id: ""
	I0314 00:58:08.549319   66021 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:08.570357   66021 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:08.580942   66021 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:08.580961   66021 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:08.581002   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 00:58:08.590668   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:08.590750   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:08.600638   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 00:58:08.610219   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:08.610289   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:08.620324   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 00:58:08.629979   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:08.630037   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:08.640264   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 00:58:08.650070   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:08.650126   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:08.661293   66021 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:08.671779   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:08.808194   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:09.724860   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:09.979007   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:10.059809   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:08.850333   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.132696   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.132738   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:09.349928   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.354965   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.355007   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:09.850589   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.855760   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.855791   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:10.350395   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:10.356047   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0314 00:58:10.363343   65864 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 00:58:10.363367   65864 api_server.go:131] duration metric: took 7.013748269s to wait for apiserver health ...
	I0314 00:58:10.363376   65864 cni.go:84] Creating CNI manager for ""
	I0314 00:58:10.363382   65864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:10.365214   65864 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:10.366578   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:58:10.388294   65864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:58:10.416671   65864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:58:10.432468   65864 system_pods.go:59] 8 kube-system pods found
	I0314 00:58:10.432506   65864 system_pods.go:61] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:58:10.432513   65864 system_pods.go:61] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:58:10.432522   65864 system_pods.go:61] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:58:10.432528   65864 system_pods.go:61] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:58:10.432532   65864 system_pods.go:61] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 00:58:10.432536   65864 system_pods.go:61] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:58:10.432541   65864 system_pods.go:61] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:58:10.432545   65864 system_pods.go:61] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 00:58:10.432552   65864 system_pods.go:74] duration metric: took 15.857608ms to wait for pod list to return data ...
	I0314 00:58:10.432558   65864 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:58:10.435982   65864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:58:10.436009   65864 node_conditions.go:123] node cpu capacity is 2
	I0314 00:58:10.436022   65864 node_conditions.go:105] duration metric: took 3.459248ms to run NodePressure ...
	I0314 00:58:10.436048   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:10.711752   65864 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:58:10.718781   65864 kubeadm.go:733] kubelet initialised
	I0314 00:58:10.718802   65864 kubeadm.go:734] duration metric: took 7.016806ms waiting for restarted kubelet to initialise ...
	I0314 00:58:10.718811   65864 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:10.725838   65864 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.732973   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "coredns-76f75df574-lptfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.733003   65864 pod_ready.go:81] duration metric: took 7.130935ms for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.733015   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "coredns-76f75df574-lptfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.733024   65864 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.739301   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "etcd-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.739330   65864 pod_ready.go:81] duration metric: took 6.292816ms for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.739344   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "etcd-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.739353   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.745734   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-apiserver-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.745764   65864 pod_ready.go:81] duration metric: took 6.401917ms for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.745775   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-apiserver-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.745793   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.823797   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.823901   65864 pod_ready.go:81] duration metric: took 78.092373ms for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.823920   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.823930   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:11.221218   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-proxy-wpdb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.221255   65864 pod_ready.go:81] duration metric: took 397.31401ms for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:11.221268   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-proxy-wpdb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.221276   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:11.622051   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-scheduler-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.622089   65864 pod_ready.go:81] duration metric: took 400.804067ms for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:11.622101   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-scheduler-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.622109   65864 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:12.021835   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:12.021869   65864 pod_ready.go:81] duration metric: took 399.741056ms for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:12.021882   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:12.021892   65864 pod_ready.go:38] duration metric: took 1.303069721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:12.021915   65864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:58:12.039361   65864 ops.go:34] apiserver oom_adj: -16
	I0314 00:58:12.039397   65864 kubeadm.go:591] duration metric: took 11.402802169s to restartPrimaryControlPlane
	I0314 00:58:12.039408   65864 kubeadm.go:393] duration metric: took 11.467836192s to StartCluster
	I0314 00:58:12.039426   65864 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:12.039516   65864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:12.041925   65864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:12.042230   65864 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:58:12.044069   65864 out.go:177] * Verifying Kubernetes components...
	I0314 00:58:12.042310   65864 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:58:12.042489   65864 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:58:12.045460   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:12.045470   65864 addons.go:69] Setting metrics-server=true in profile "no-preload-585806"
	I0314 00:58:12.045505   65864 addons.go:234] Setting addon metrics-server=true in "no-preload-585806"
	W0314 00:58:12.045517   65864 addons.go:243] addon metrics-server should already be in state true
	I0314 00:58:12.045461   65864 addons.go:69] Setting storage-provisioner=true in profile "no-preload-585806"
	I0314 00:58:12.045548   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.045557   65864 addons.go:234] Setting addon storage-provisioner=true in "no-preload-585806"
	W0314 00:58:12.045568   65864 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:58:12.045595   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.045462   65864 addons.go:69] Setting default-storageclass=true in profile "no-preload-585806"
	I0314 00:58:12.045653   65864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-585806"
	I0314 00:58:12.045960   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.045989   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.045989   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.046009   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.046026   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.046052   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.065596   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38705
	I0314 00:58:12.065599   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
	I0314 00:58:12.066126   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.066229   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.066725   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.066747   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.066921   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.066937   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.067164   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.067341   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.067347   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.067943   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.067969   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.071254   65864 addons.go:234] Setting addon default-storageclass=true in "no-preload-585806"
	W0314 00:58:12.071275   65864 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:58:12.071302   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.071676   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.071703   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.089025   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0314 00:58:12.089439   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.089971   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.089987   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.091596   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38431
	I0314 00:58:12.091896   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.092061   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.092552   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.092573   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.092792   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39831
	I0314 00:58:12.092997   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.093009   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.093356   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.093879   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.093914   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.094125   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.094811   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.094830   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.095229   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.095432   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.097415   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.099392   65864 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:12.100577   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:58:12.100594   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:58:12.100618   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.103892   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.104467   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.104489   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.104667   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.106971   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.107150   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.107313   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.111900   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0314 00:58:12.112581   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.113114   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.113130   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.113580   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.113776   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.115360   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.115676   65864 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:12.115691   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:58:12.115707   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.117453   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0314 00:58:12.118029   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.118488   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.118776   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.118793   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.118960   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.118982   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.119173   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.119729   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.119945   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.121529   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.123821   65864 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:08.814918   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:08.815383   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:08.815414   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:08.815336   66971 retry.go:31] will retry after 1.619075545s: waiting for machine to come up
	I0314 00:58:10.435841   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:10.436245   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:10.436272   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:10.436204   66971 retry.go:31] will retry after 2.396707044s: waiting for machine to come up
	I0314 00:58:12.834287   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:12.834691   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:12.834720   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:12.834649   66971 retry.go:31] will retry after 2.803309164s: waiting for machine to come up
	I0314 00:58:12.122163   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.125529   65864 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:12.125549   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:58:12.125566   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.125622   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.128908   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.128920   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.129475   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.129499   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.129653   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.129851   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.130023   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.130149   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.258865   65864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:12.279758   65864 node_ready.go:35] waiting up to 6m0s for node "no-preload-585806" to be "Ready" ...
	I0314 00:58:12.393255   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:58:12.393276   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:58:12.396083   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:12.401894   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:12.442825   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:58:12.442852   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:58:12.516967   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:12.516997   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:58:12.549493   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:13.476386   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.080265638s)
	I0314 00:58:13.476460   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.476489   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.476397   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.074462931s)
	I0314 00:58:13.476626   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.476639   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477023   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477039   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477036   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477047   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.477055   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477066   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477071   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477087   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477094   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.477100   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477458   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477491   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477498   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477550   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477566   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.489141   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.489174   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.489460   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.489522   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.489541   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.586956   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037420385s)
	I0314 00:58:13.587013   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.587029   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.587367   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.587386   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.587396   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.587405   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.587406   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.587781   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.587856   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.587878   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.587910   65864 addons.go:470] Verifying addon metrics-server=true in "no-preload-585806"
	I0314 00:58:13.590325   65864 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:58:13.591691   65864 addons.go:505] duration metric: took 1.549382287s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 00:58:10.176806   66021 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:10.176884   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:10.677299   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:11.177069   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:11.214552   66021 api_server.go:72] duration metric: took 1.037744324s to wait for apiserver process to appear ...
	I0314 00:58:11.214587   66021 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:11.214610   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:11.215138   66021 api_server.go:269] stopped: https://192.168.61.7:8444/healthz: Get "https://192.168.61.7:8444/healthz": dial tcp 192.168.61.7:8444: connect: connection refused
	I0314 00:58:11.714667   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.616838   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:14.616877   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:14.616893   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.658759   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:14.658796   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:14.715024   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.733591   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:14.733634   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:15.214665   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:15.234066   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:15.234110   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:15.715301   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:15.721645   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:15.721675   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:16.215286   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:16.222564   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0314 00:58:16.232709   66021 api_server.go:141] control plane version: v1.28.4
	I0314 00:58:16.232737   66021 api_server.go:131] duration metric: took 5.018142072s to wait for apiserver health ...
	I0314 00:58:16.232747   66021 cni.go:84] Creating CNI manager for ""
	I0314 00:58:16.232756   66021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:16.234470   66021 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:16.235612   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:58:16.248214   66021 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:58:16.277370   66021 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:58:16.288623   66021 system_pods.go:59] 8 kube-system pods found
	I0314 00:58:16.288650   66021 system_pods.go:61] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:58:16.288657   66021 system_pods.go:61] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:58:16.288663   66021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:58:16.288671   66021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:58:16.288677   66021 system_pods.go:61] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 00:58:16.288682   66021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:58:16.288687   66021 system_pods.go:61] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:58:16.288690   66021 system_pods.go:61] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 00:58:16.288696   66021 system_pods.go:74] duration metric: took 11.305344ms to wait for pod list to return data ...
	I0314 00:58:16.288702   66021 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:58:16.292286   66021 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:58:16.292308   66021 node_conditions.go:123] node cpu capacity is 2
	I0314 00:58:16.292320   66021 node_conditions.go:105] duration metric: took 3.61409ms to run NodePressure ...
	I0314 00:58:16.292335   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:16.512870   66021 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:58:16.517507   66021 kubeadm.go:733] kubelet initialised
	I0314 00:58:16.517529   66021 kubeadm.go:734] duration metric: took 4.638745ms waiting for restarted kubelet to initialise ...
	I0314 00:58:16.517536   66021 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:16.523002   66021 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.527973   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.527992   66021 pod_ready.go:81] duration metric: took 4.971635ms for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.527999   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.528005   66021 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.532109   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.532130   66021 pod_ready.go:81] duration metric: took 4.119441ms for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.532138   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.532144   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.536921   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.536947   66021 pod_ready.go:81] duration metric: took 4.797369ms for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.536957   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.536963   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.681145   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.681174   66021 pod_ready.go:81] duration metric: took 144.203955ms for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.681183   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.681189   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.081346   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-proxy-s7dwp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.081372   66021 pod_ready.go:81] duration metric: took 400.176843ms for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.081380   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-proxy-s7dwp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.081386   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.481726   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.481760   66021 pod_ready.go:81] duration metric: took 400.364366ms for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.481775   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.481784   66021 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.881076   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.881101   66021 pod_ready.go:81] duration metric: took 399.308565ms for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.881112   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.881118   66021 pod_ready.go:38] duration metric: took 1.363574607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:17.881137   66021 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:58:17.893680   66021 ops.go:34] apiserver oom_adj: -16
	I0314 00:58:17.893703   66021 kubeadm.go:591] duration metric: took 9.411432465s to restartPrimaryControlPlane
	I0314 00:58:17.893711   66021 kubeadm.go:393] duration metric: took 9.465165177s to StartCluster
	I0314 00:58:17.893725   66021 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:17.893783   66021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:17.895292   66021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:17.895523   66021 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:58:17.897956   66021 out.go:177] * Verifying Kubernetes components...
	I0314 00:58:17.895646   66021 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:58:17.895730   66021 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:17.898002   66021 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.898023   66021 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.899554   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:17.897994   66021 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.899681   66021 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.899693   66021 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:58:17.898063   66021 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-652215"
	I0314 00:58:17.899720   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.898068   66021 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.899784   66021 addons.go:243] addon metrics-server should already be in state true
	I0314 00:58:17.899811   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.900048   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900077   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.900111   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900141   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.900171   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900188   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.915185   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0314 00:58:17.915208   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
	I0314 00:58:17.915576   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.915710   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.916152   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.916171   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.916305   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.916330   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.916511   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.916671   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.916831   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.917105   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.917132   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.918252   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41673
	I0314 00:58:17.918697   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.919230   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.919250   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.919523   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.920110   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.920171   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.920214   66021 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.920231   66021 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:58:17.920262   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.920646   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.920681   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.932173   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0314 00:58:17.932593   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.933094   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.933117   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.933473   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.933707   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.934448   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0314 00:58:17.934516   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I0314 00:58:17.934891   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.935069   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.935423   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.935443   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.935577   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.935595   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.935663   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.937699   66021 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:17.936039   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.936042   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.938931   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:58:17.938948   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:58:17.938977   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.939211   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.939596   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.939625   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.941065   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.942845   66021 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:15.639214   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:15.639656   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:15.639696   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:15.639617   66971 retry.go:31] will retry after 3.192360952s: waiting for machine to come up
	I0314 00:58:14.292798   65864 node_ready.go:53] node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:16.784397   65864 node_ready.go:53] node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:17.284580   65864 node_ready.go:49] node "no-preload-585806" has status "Ready":"True"
	I0314 00:58:17.284611   65864 node_ready.go:38] duration metric: took 5.004823398s for node "no-preload-585806" to be "Ready" ...
	I0314 00:58:17.284623   65864 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:17.290888   65864 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.297127   65864 pod_ready.go:92] pod "coredns-76f75df574-lptfk" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:17.297152   65864 pod_ready.go:81] duration metric: took 6.235547ms for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.297163   65864 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.944316   66021 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:17.942113   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.942648   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.944350   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:58:17.944376   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.944371   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.944451   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.944500   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.944675   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.944826   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:17.947097   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.947474   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.947507   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.947640   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.947816   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.947960   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.948095   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:17.957502   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35563
	I0314 00:58:17.957899   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.958344   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.958364   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.958645   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.958816   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.960222   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.960577   66021 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:17.960591   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:58:17.960610   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.963238   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.963676   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.963698   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.963850   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.963995   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.964114   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.964213   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:18.098402   66021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:18.116854   66021 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-652215" to be "Ready" ...
	I0314 00:58:18.232236   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:58:18.232256   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:58:18.238208   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:18.261851   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:18.263856   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:58:18.263877   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:58:18.325498   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:18.325520   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:58:18.391369   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:19.482825   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24458075s)
	I0314 00:58:19.482879   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.482891   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.482959   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.221078542s)
	I0314 00:58:19.483000   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483017   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483196   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483216   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.483212   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.483224   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483233   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483242   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483258   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.483273   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483280   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483288   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.483551   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483590   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.484020   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.484105   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.484148   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.491315   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.491332   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.491552   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.491583   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583024   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.191597961s)
	I0314 00:58:19.583083   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.583096   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.583362   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.583400   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583421   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.583435   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.583447   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.583724   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.583762   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.583815   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583837   66021 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-652215"
	I0314 00:58:19.585771   66021 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:58:19.587252   66021 addons.go:505] duration metric: took 1.691609624s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 00:58:20.120924   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:18.833069   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:18.833438   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:18.833470   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:18.833388   66971 retry.go:31] will retry after 5.67556795s: waiting for machine to come up
	I0314 00:58:19.304162   65864 pod_ready.go:102] pod "etcd-no-preload-585806" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:20.804158   65864 pod_ready.go:92] pod "etcd-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.804180   65864 pod_ready.go:81] duration metric: took 3.507009199s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.804191   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.810040   65864 pod_ready.go:92] pod "kube-apiserver-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.810065   65864 pod_ready.go:81] duration metric: took 5.865494ms for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.810080   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.815049   65864 pod_ready.go:92] pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.815077   65864 pod_ready.go:81] duration metric: took 4.984409ms for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.815086   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.821316   65864 pod_ready.go:92] pod "kube-proxy-wpdb9" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.821342   65864 pod_ready.go:81] duration metric: took 6.249664ms for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.821354   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:21.828500   65864 pod_ready.go:92] pod "kube-scheduler-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:21.828524   65864 pod_ready.go:81] duration metric: took 1.00716238s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:21.828533   65864 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:22.621791   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:25.121386   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:26.059625   65557 start.go:364] duration metric: took 59.181975988s to acquireMachinesLock for "embed-certs-164135"
	I0314 00:58:26.059670   65557 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:58:26.059681   65557 fix.go:54] fixHost starting: 
	I0314 00:58:26.060084   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:26.060117   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:26.079338   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0314 00:58:26.079705   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:26.080159   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:58:26.080181   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:26.080547   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:26.080747   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:26.080907   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:58:26.082633   65557 fix.go:112] recreateIfNeeded on embed-certs-164135: state=Stopped err=<nil>
	I0314 00:58:26.082671   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	W0314 00:58:26.082861   65557 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:58:26.085610   65557 out.go:177] * Restarting existing kvm2 VM for "embed-certs-164135" ...
	I0314 00:58:24.511666   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.512275   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has current primary IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.512307   66232 main.go:141] libmachine: (old-k8s-version-004791) Found IP for machine: 192.168.72.11
	I0314 00:58:24.512321   66232 main.go:141] libmachine: (old-k8s-version-004791) Reserving static IP address...
	I0314 00:58:24.512704   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.512726   66232 main.go:141] libmachine: (old-k8s-version-004791) Reserved static IP address: 192.168.72.11
	I0314 00:58:24.512740   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | skip adding static IP to network mk-old-k8s-version-004791 - found existing host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"}
	I0314 00:58:24.512751   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Getting to WaitForSSH function...
	I0314 00:58:24.512763   66232 main.go:141] libmachine: (old-k8s-version-004791) Waiting for SSH to be available...
	I0314 00:58:24.515177   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.515623   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.515657   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.515863   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH client type: external
	I0314 00:58:24.515892   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa (-rw-------)
	I0314 00:58:24.515924   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:58:24.515940   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | About to run SSH command:
	I0314 00:58:24.515956   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | exit 0
	I0314 00:58:24.642866   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | SSH cmd err, output: <nil>: 
	I0314 00:58:24.643186   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetConfigRaw
	I0314 00:58:24.643853   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:24.645950   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.646309   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.646338   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.646566   66232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:58:24.646801   66232 machine.go:94] provisionDockerMachine start ...
	I0314 00:58:24.646823   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:24.647032   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.649249   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.649588   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.649618   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.649752   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.649926   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.650131   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.650315   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.650487   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.650664   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.650675   66232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:58:24.763290   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:58:24.763320   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:24.763558   66232 buildroot.go:166] provisioning hostname "old-k8s-version-004791"
	I0314 00:58:24.763592   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:24.763745   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.766422   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.766719   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.766745   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.766894   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.767075   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.767238   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.767388   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.767564   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.767776   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.767795   66232 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-004791 && echo "old-k8s-version-004791" | sudo tee /etc/hostname
	I0314 00:58:24.893811   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-004791
	
	I0314 00:58:24.893844   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.896527   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.896909   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.896937   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.897096   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.897277   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.897455   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.897623   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.897814   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.897979   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.897995   66232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-004791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-004791/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-004791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:25.021661   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:25.021695   66232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:25.021722   66232 buildroot.go:174] setting up certificates
	I0314 00:58:25.021735   66232 provision.go:84] configureAuth start
	I0314 00:58:25.021766   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:25.022032   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:25.024687   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.024989   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.025030   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.025155   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.027609   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.027948   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.027977   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.028079   66232 provision.go:143] copyHostCerts
	I0314 00:58:25.028145   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:25.028155   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:25.028208   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:25.028333   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:25.028342   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:25.028361   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:25.028421   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:25.028428   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:25.028445   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:25.028532   66232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-004791 san=[127.0.0.1 192.168.72.11 localhost minikube old-k8s-version-004791]
	I0314 00:58:25.338174   66232 provision.go:177] copyRemoteCerts
	I0314 00:58:25.338239   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:25.338272   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.340651   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.341044   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.341084   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.341243   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.341445   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.341613   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.341779   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:25.437346   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 00:58:25.464534   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:58:25.491186   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:25.520290   66232 provision.go:87] duration metric: took 498.536449ms to configureAuth
	I0314 00:58:25.520330   66232 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:25.520551   66232 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:58:25.520631   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.523579   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.523954   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.523982   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.524176   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.524418   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.524604   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.524841   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.525032   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:25.525233   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:25.525267   66232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:25.813702   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:25.813724   66232 machine.go:97] duration metric: took 1.166910056s to provisionDockerMachine
	I0314 00:58:25.813735   66232 start.go:293] postStartSetup for "old-k8s-version-004791" (driver="kvm2")
	I0314 00:58:25.813745   66232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:25.813767   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:25.814102   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:25.814132   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.816973   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.817316   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.817351   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.817496   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.817695   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.817895   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.818065   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:25.905564   66232 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:25.910139   66232 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:25.910168   66232 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:25.910237   66232 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:25.910315   66232 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:25.910406   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:25.919998   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:25.946236   66232 start.go:296] duration metric: took 132.483335ms for postStartSetup
	I0314 00:58:25.946270   66232 fix.go:56] duration metric: took 24.778527973s for fixHost
	I0314 00:58:25.946291   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.948993   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.949352   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.949382   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.949491   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.949674   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.949839   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.950008   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.950178   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:25.950327   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:25.950337   66232 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:58:26.059477   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377906.045276928
	
	I0314 00:58:26.059498   66232 fix.go:216] guest clock: 1710377906.045276928
	I0314 00:58:26.059504   66232 fix.go:229] Guest: 2024-03-14 00:58:26.045276928 +0000 UTC Remote: 2024-03-14 00:58:25.946273472 +0000 UTC m=+262.884746009 (delta=99.003456ms)
	I0314 00:58:26.059522   66232 fix.go:200] guest clock delta is within tolerance: 99.003456ms
	I0314 00:58:26.059528   66232 start.go:83] releasing machines lock for "old-k8s-version-004791", held for 24.891823469s
	I0314 00:58:26.059556   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.059832   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:26.062667   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.063095   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.063126   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.063322   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064047   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064262   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064348   66232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:26.064396   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:26.064505   66232 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:26.064530   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:26.067308   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067569   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067602   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.067626   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067738   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:26.067912   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:26.068059   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:26.068063   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.068095   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.068199   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:26.068210   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:26.068347   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:26.068538   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:26.068717   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:26.182072   66232 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:26.188630   66232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:26.337675   66232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:26.344107   66232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:26.344178   66232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:26.363679   66232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:26.363704   66232 start.go:494] detecting cgroup driver to use...
	I0314 00:58:26.363770   66232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:26.380626   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:26.397287   66232 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:26.397354   66232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:26.411921   66232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:26.428111   66232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:26.548503   66232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:26.718585   66232 docker.go:233] disabling docker service ...
	I0314 00:58:26.718667   66232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:26.737814   66232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:26.759326   66232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:26.907505   66232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:27.052915   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:27.074324   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:27.096627   66232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 00:58:27.096688   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.109204   66232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:27.109280   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.122529   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.135542   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.149084   66232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:27.166838   66232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:27.178148   66232 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:27.178201   66232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:27.194015   66232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:27.206652   66232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:27.363680   66232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:58:27.546218   66232 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:27.546291   66232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:27.552622   66232 start.go:562] Will wait 60s for crictl version
	I0314 00:58:27.552693   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:27.557087   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:27.600271   66232 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:27.600369   66232 ssh_runner.go:195] Run: crio --version
	I0314 00:58:27.631397   66232 ssh_runner.go:195] Run: crio --version
	I0314 00:58:27.670760   66232 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 00:58:27.671963   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:27.674890   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:27.675324   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:27.675352   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:27.675617   66232 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:27.680460   66232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:27.694168   66232 kubeadm.go:877] updating cluster {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:27.694308   66232 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:58:27.694363   66232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:27.750541   66232 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:58:27.750608   66232 ssh_runner.go:195] Run: which lz4
	I0314 00:58:27.755341   66232 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 00:58:27.759948   66232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:27.759972   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 00:58:23.835559   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:25.840794   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:28.343597   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:26.087053   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Start
	I0314 00:58:26.087223   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring networks are active...
	I0314 00:58:26.087972   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring network default is active
	I0314 00:58:26.088454   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring network mk-embed-certs-164135 is active
	I0314 00:58:26.088918   65557 main.go:141] libmachine: (embed-certs-164135) Getting domain xml...
	I0314 00:58:26.089551   65557 main.go:141] libmachine: (embed-certs-164135) Creating domain...
	I0314 00:58:27.427891   65557 main.go:141] libmachine: (embed-certs-164135) Waiting to get IP...
	I0314 00:58:27.428743   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.429231   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.429301   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.429210   67191 retry.go:31] will retry after 285.906124ms: waiting for machine to come up
	I0314 00:58:27.716658   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.717175   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.717209   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.717136   67191 retry.go:31] will retry after 261.410434ms: waiting for machine to come up
	I0314 00:58:27.980701   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.981229   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.981260   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.981171   67191 retry.go:31] will retry after 383.915233ms: waiting for machine to come up
	I0314 00:58:28.366876   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:28.367381   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:28.367410   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:28.367323   67191 retry.go:31] will retry after 409.436475ms: waiting for machine to come up
	I0314 00:58:28.778072   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:28.778576   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:28.778610   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:28.778531   67191 retry.go:31] will retry after 645.067189ms: waiting for machine to come up
	I0314 00:58:25.621956   66021 node_ready.go:49] node "default-k8s-diff-port-652215" has status "Ready":"True"
	I0314 00:58:25.621981   66021 node_ready.go:38] duration metric: took 7.505100774s for node "default-k8s-diff-port-652215" to be "Ready" ...
	I0314 00:58:25.622001   66021 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:25.629545   66021 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.639732   66021 pod_ready.go:92] pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.639756   66021 pod_ready.go:81] duration metric: took 10.187009ms for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.639764   66021 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.645147   66021 pod_ready.go:92] pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.645169   66021 pod_ready.go:81] duration metric: took 5.39858ms for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.645177   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.654707   66021 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.654733   66021 pod_ready.go:81] duration metric: took 9.549239ms for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.654744   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.662542   66021 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.662564   66021 pod_ready.go:81] duration metric: took 7.811214ms for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.662573   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:26.022161   66021 pod_ready.go:92] pod "kube-proxy-s7dwp" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:26.022183   66021 pod_ready.go:81] duration metric: took 359.604841ms for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:26.022192   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:28.034582   66021 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"False"
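
	The pod_ready checks above wait for each system-critical pod to report the Ready condition. The following is a minimal sketch of that kind of check using client-go, not minikube's actual pod_ready.go; the kubeconfig path is a placeholder, and the namespace, pod name, and 6m0s timeout are copied from the log for illustration.

// Sketch of a "wait for pod Ready" check with client-go; not minikube's pod_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; minikube resolves this per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // "waiting up to 6m0s" in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-5dd5756b68-cc7x2", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println(`pod has status "Ready":"True"`)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
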
	I0314 00:58:29.648218   66232 crio.go:444] duration metric: took 1.892901715s to copy over tarball
	I0314 00:58:29.648301   66232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:32.846478   66232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.198145754s)
	I0314 00:58:32.846506   66232 crio.go:451] duration metric: took 3.198257099s to extract the tarball
	I0314 00:58:32.846513   66232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:32.893263   66232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:32.930449   66232 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:58:32.930473   66232 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:58:32.930511   66232 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:32.930536   66232 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:32.930550   66232 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:32.930559   66232 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:32.930802   66232 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:32.930888   66232 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 00:58:32.930940   66232 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:32.931147   66232 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:32.931888   66232 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:32.931948   66232 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:32.932319   66232 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:32.932341   66232 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 00:58:32.932374   66232 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:32.932381   66232 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:32.932370   66232 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:32.932419   66232 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:30.836400   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:32.841831   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:29.425434   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:29.425984   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:29.426008   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:29.425942   67191 retry.go:31] will retry after 703.398838ms: waiting for machine to come up
	I0314 00:58:30.130649   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:30.131265   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:30.131297   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:30.131224   67191 retry.go:31] will retry after 787.377618ms: waiting for machine to come up
	I0314 00:58:30.919951   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:30.920381   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:30.920416   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:30.920331   67191 retry.go:31] will retry after 1.211901471s: waiting for machine to come up
	I0314 00:58:32.133720   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:32.134308   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:32.134337   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:32.134254   67191 retry.go:31] will retry after 1.852403479s: waiting for machine to come up
	I0314 00:58:33.987895   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:33.988474   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:33.988503   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:33.988426   67191 retry.go:31] will retry after 2.321557159s: waiting for machine to come up
	I0314 00:58:30.530679   66021 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:30.530711   66021 pod_ready.go:81] duration metric: took 4.508510256s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:30.530725   66021 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:32.539227   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:34.543975   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:33.154008   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 00:58:33.158391   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.163815   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.167903   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.168224   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.169039   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.185385   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.418931   66232 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 00:58:33.418981   66232 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 00:58:33.419031   66232 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 00:58:33.419052   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419063   66232 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 00:58:33.419099   66232 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.419031   66232 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 00:58:33.419118   66232 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 00:58:33.419141   66232 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.419173   66232 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 00:58:33.419200   66232 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.419232   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419099   66232 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.419310   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419177   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419143   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419142   66232 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 00:58:33.419396   66232 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.419419   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419144   66232 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.419472   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.436581   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 00:58:33.436585   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.436693   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.436697   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.436760   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.436812   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.436821   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.605693   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 00:58:33.605727   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 00:58:33.605788   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 00:58:33.605799   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 00:58:33.605879   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 00:58:33.605912   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 00:58:33.605952   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 00:58:33.844071   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:33.989885   66232 cache_images.go:92] duration metric: took 1.059398314s to LoadCachedImages
	W0314 00:58:33.990001   66232 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
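
	The cache_images/cri lines above decide, per image, whether it "needs transfer": the container runtime (queried here through podman image inspect) must report the exact expected image ID, otherwise minikube falls back to loading a cached tarball from its local image cache. Below is a reduced sketch of that check run locally rather than over SSH as minikube does; the single expected ID is the one the log reports for pause:3.2, and the map is illustrative.

// Sketch of the "needs transfer" decision; expected IDs are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeImageID asks the container runtime for the stored ID of an image,
// mirroring the "podman image inspect --format {{.Id}}" runs in the log.
func runtimeImageID(image string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	expected := map[string]string{
		// Expected IDs would come from minikube's cached image metadata.
		"registry.k8s.io/pause:3.2": "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
	}
	for image, want := range expected {
		got, err := runtimeImageID(image)
		if err != nil || got != want {
			fmt.Printf("%q needs transfer: does not exist at hash %q in container runtime\n", image, want)
			continue
		}
		fmt.Printf("%q already present\n", image)
	}
}
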
	I0314 00:58:33.990027   66232 kubeadm.go:928] updating node { 192.168.72.11 8443 v1.20.0 crio true true} ...
	I0314 00:58:33.990157   66232 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-004791 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:33.990220   66232 ssh_runner.go:195] Run: crio config
	I0314 00:58:34.044723   66232 cni.go:84] Creating CNI manager for ""
	I0314 00:58:34.044746   66232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:34.044759   66232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:34.044775   66232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-004791 NodeName:old-k8s-version-004791 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 00:58:34.044900   66232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-004791"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:34.044958   66232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 00:58:34.059679   66232 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:34.059734   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:34.073682   66232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0314 00:58:34.095098   66232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:34.113899   66232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
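
	The kubeadm.go:187 config above is generated from the parameters shown in the kubeadm.go:181 options line and written to /var/tmp/minikube/kubeadm.yaml.new, as the scp step shows. The snippet below is a heavily reduced, purely illustrative sketch of rendering such a config with Go's text/template; it is not minikube's actual template, which covers far more fields.

// Reduced sketch of rendering a kubeadm ClusterConfiguration from parameters;
// minikube's real template is much larger.
package main

import (
	"os"
	"text/template"
)

const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	// Values copied from the kubeadm.go:181 options line above.
	params := struct {
		APIServerPort     int
		KubernetesVersion string
		DNSDomain         string
		PodSubnet         string
		ServiceCIDR       string
	}{8443, "v1.20.0", "cluster.local", "10.244.0.0/16", "10.96.0.0/12"}

	t := template.Must(template.New("kubeadm").Parse(clusterConfig))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
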
	I0314 00:58:34.132875   66232 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:34.137285   66232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:34.151566   66232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:34.276059   66232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:34.295472   66232 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791 for IP: 192.168.72.11
	I0314 00:58:34.295496   66232 certs.go:194] generating shared ca certs ...
	I0314 00:58:34.295528   66232 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:34.295718   66232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:34.295779   66232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:34.295794   66232 certs.go:256] generating profile certs ...
	I0314 00:58:34.295909   66232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/client.key
	I0314 00:58:34.295968   66232 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key.c57f8e0c
	I0314 00:58:34.296022   66232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key
	I0314 00:58:34.296176   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:34.296213   66232 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:34.296224   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:34.296255   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:34.296296   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:34.296336   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:34.296397   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:34.297181   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:34.351330   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:34.389003   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:34.439281   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:34.476704   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 00:58:34.524931   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:58:34.554905   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:34.584216   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:58:34.610661   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:34.636484   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:34.662623   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:34.692373   66232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:34.714670   66232 ssh_runner.go:195] Run: openssl version
	I0314 00:58:34.721394   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:34.734219   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.739692   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.739767   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.746281   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:34.758520   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:34.770960   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.775963   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.776034   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.782485   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:34.795932   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:34.808632   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.814277   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.814338   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.820985   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
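
	The openssl/ln sequence above installs each CA into the guest's trust store: the certificate's subject hash is computed and the PEM is symlinked as /etc/ssl/certs/<hash>.0. The sketch below performs the same steps with the paths taken from the log; it shells out to openssl exactly as the logged commands do and is not minikube's certs.go code.

// Sketch of installing a CA into the system trust store; run as root on the guest.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCA(pemPath string) error {
	// Compute the subject hash, as "openssl x509 -hash -noout -in ..." does above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // recreate the symlink if it already exists
	return os.Symlink(pemPath, link)
}

func main() {
	// Paths taken from the ca-certificates steps in the log above.
	for _, p := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/12268.pem",
		"/usr/share/ca-certificates/122682.pem",
	} {
		if err := installCA(p); err != nil {
			fmt.Fprintln(os.Stderr, "install failed:", p, err)
		}
	}
}
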
	I0314 00:58:34.832959   66232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:34.838642   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:34.845061   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:34.852475   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:34.859861   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:34.866413   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:34.873327   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
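
	Each "openssl x509 ... -checkend 86400" run above asks whether the certificate expires within the next 24 hours. An equivalent, purely illustrative check in Go using crypto/x509 follows; the two paths are taken from the log.

// Native equivalent of "openssl x509 -checkend 86400": report certificates
// that will expire within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
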
	I0314 00:58:34.880000   66232 kubeadm.go:391] StartCluster: {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:34.880134   66232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:34.880194   66232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:34.927555   66232 cri.go:89] found id: ""
	I0314 00:58:34.927623   66232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:34.939638   66232 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:34.939668   66232 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:34.939677   66232 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:34.939741   66232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:34.950530   66232 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:34.952013   66232 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-004791" does not appear in /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:34.952997   66232 kubeconfig.go:62] /home/jenkins/minikube-integration/18375-4912/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-004791" cluster setting kubeconfig missing "old-k8s-version-004791" context setting]
	I0314 00:58:34.954526   66232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:34.956927   66232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:34.968566   66232 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.11
	I0314 00:58:34.968605   66232 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:34.968619   66232 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:34.968700   66232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:35.007848   66232 cri.go:89] found id: ""
	I0314 00:58:35.007925   66232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:35.025328   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:35.038637   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:35.038656   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:35.038709   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:35.050807   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:35.050869   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:35.063219   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:35.075855   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:35.075920   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:35.085699   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:35.095334   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:35.095380   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:35.105241   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:35.115726   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:35.115792   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:35.125426   66232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
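
	The grep/rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so the following "kubeadm init phase kubeconfig" regenerates it. A compact sketch of that logic follows; it runs directly as root instead of via sudo over SSH, and is not minikube's actual kubeadm.go code.

// Compact sketch of the stale kubeconfig check; runs directly as root.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = os.Remove(f) // missing files are fine, matching the log above
		}
	}
}
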
	I0314 00:58:35.135277   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:35.258033   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.100884   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.354746   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.473996   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.579335   66232 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:36.579424   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:37.079896   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:37.579976   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:38.079765   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:35.336276   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:37.336541   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:36.312235   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:36.312720   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:36.312746   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:36.312680   67191 retry.go:31] will retry after 2.808090469s: waiting for machine to come up
	I0314 00:58:39.123977   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:39.124488   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:39.124538   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:39.124440   67191 retry.go:31] will retry after 2.588860378s: waiting for machine to come up
	I0314 00:58:37.037739   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:39.540372   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:38.579818   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.079976   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.579658   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:40.079585   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:40.580162   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:41.079979   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:41.580503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:42.079887   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:42.579730   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:43.080073   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.838343   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:42.335840   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:41.714544   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:41.715054   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:41.715078   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:41.715008   67191 retry.go:31] will retry after 4.450032332s: waiting for machine to come up
	I0314 00:58:41.540801   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:44.037483   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:43.579875   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.080058   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.579576   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:45.080234   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:45.579747   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:46.080269   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:46.579541   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:47.079514   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:47.580409   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:48.080337   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
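
	The repeated pgrep runs above are api_server.go waiting for the kube-apiserver process to appear, polling roughly every 500ms. The snippet below is a minimal sketch of that poll; the four-minute deadline is an assumption, not the value minikube uses.

// Minimal sketch of waiting for the apiserver process; deadline is an assumption.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Same probe as the logged command: newest exact match on the full command line.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Println("apiserver process appeared, pid:", strings.TrimSpace(string(out)))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process to appear")
}
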
	I0314 00:58:44.337213   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:46.835872   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:46.166725   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.167181   65557 main.go:141] libmachine: (embed-certs-164135) Found IP for machine: 192.168.50.72
	I0314 00:58:46.167200   65557 main.go:141] libmachine: (embed-certs-164135) Reserving static IP address...
	I0314 00:58:46.167211   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has current primary IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.167614   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "embed-certs-164135", mac: "52:54:00:58:8b:2b", ip: "192.168.50.72"} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.167650   65557 main.go:141] libmachine: (embed-certs-164135) Reserved static IP address: 192.168.50.72
	I0314 00:58:46.167671   65557 main.go:141] libmachine: (embed-certs-164135) DBG | skip adding static IP to network mk-embed-certs-164135 - found existing host DHCP lease matching {name: "embed-certs-164135", mac: "52:54:00:58:8b:2b", ip: "192.168.50.72"}
	I0314 00:58:46.167691   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Getting to WaitForSSH function...
	I0314 00:58:46.167705   65557 main.go:141] libmachine: (embed-certs-164135) Waiting for SSH to be available...
	I0314 00:58:46.169798   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.170208   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.170241   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.170374   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Using SSH client type: external
	I0314 00:58:46.170395   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa (-rw-------)
	I0314 00:58:46.170424   65557 main.go:141] libmachine: (embed-certs-164135) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:58:46.170436   65557 main.go:141] libmachine: (embed-certs-164135) DBG | About to run SSH command:
	I0314 00:58:46.170448   65557 main.go:141] libmachine: (embed-certs-164135) DBG | exit 0
	I0314 00:58:46.298947   65557 main.go:141] libmachine: (embed-certs-164135) DBG | SSH cmd err, output: <nil>: 
	I0314 00:58:46.299260   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetConfigRaw
	I0314 00:58:46.300011   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:46.302213   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.302573   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.302601   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.302857   65557 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/config.json ...
	I0314 00:58:46.303051   65557 machine.go:94] provisionDockerMachine start ...
	I0314 00:58:46.303073   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:46.303267   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.305543   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.305933   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.305966   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.306127   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.306278   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.306414   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.306542   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.306693   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.306879   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.306892   65557 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:58:46.423896   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:58:46.423927   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.424233   65557 buildroot.go:166] provisioning hostname "embed-certs-164135"
	I0314 00:58:46.424264   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.424489   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.427579   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.428007   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.428038   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.428220   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.428416   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.428609   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.428790   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.428972   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.429192   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.429222   65557 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-164135 && echo "embed-certs-164135" | sudo tee /etc/hostname
	I0314 00:58:46.563737   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-164135
	
	I0314 00:58:46.563766   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.566892   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.567220   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.567251   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.567453   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.567641   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.567802   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.567945   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.568094   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.568261   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.568276   65557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-164135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-164135/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-164135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:46.693410   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:46.693445   65557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:46.693499   65557 buildroot.go:174] setting up certificates
	I0314 00:58:46.693511   65557 provision.go:84] configureAuth start
	I0314 00:58:46.693529   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.693870   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:46.696706   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.697040   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.697071   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.697225   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.699614   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.699942   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.699973   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.700098   65557 provision.go:143] copyHostCerts
	I0314 00:58:46.700164   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:46.700178   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:46.700232   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:46.700361   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:46.700377   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:46.700411   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:46.700495   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:46.700505   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:46.700528   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:46.700580   65557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.embed-certs-164135 san=[127.0.0.1 192.168.50.72 embed-certs-164135 localhost minikube]
	I0314 00:58:46.821935   65557 provision.go:177] copyRemoteCerts
	I0314 00:58:46.822010   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:46.822046   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.824932   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.825275   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.825310   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.825512   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.825744   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.825887   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.826082   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:46.913839   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:46.943631   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 00:58:46.971617   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 00:58:46.999369   65557 provision.go:87] duration metric: took 305.844222ms to configureAuth
	I0314 00:58:46.999394   65557 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:46.999570   65557 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:46.999664   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.002702   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.003165   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.003190   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.003438   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.003687   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.003859   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.004006   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.004146   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:47.004340   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:47.004358   65557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:47.290132   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:47.290155   65557 machine.go:97] duration metric: took 987.089694ms to provisionDockerMachine
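	(Note: the "%!s(MISSING)" tokens in the logged command above are Go's fmt package escaping a literal %s inside the string being logged, not a corrupted command. A minimal sketch of what actually runs on the guest, paths taken from the log, illustrative only:)
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio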
	I0314 00:58:47.290168   65557 start.go:293] postStartSetup for "embed-certs-164135" (driver="kvm2")
	I0314 00:58:47.290182   65557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:47.290203   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.290511   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:47.290552   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.293582   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.293932   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.293962   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.294089   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.294272   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.294428   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.294671   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.387339   65557 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:47.392557   65557 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:47.392582   65557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:47.392654   65557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:47.392748   65557 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:47.392858   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:47.404173   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:47.435222   65557 start.go:296] duration metric: took 145.038242ms for postStartSetup
	I0314 00:58:47.435269   65557 fix.go:56] duration metric: took 21.375588272s for fixHost
	I0314 00:58:47.435302   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.438631   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.439032   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.439076   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.439272   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.439467   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.439706   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.439850   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.440043   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:47.440200   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:47.440210   65557 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:58:47.560144   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377927.541841951
	
	I0314 00:58:47.560170   65557 fix.go:216] guest clock: 1710377927.541841951
	I0314 00:58:47.560182   65557 fix.go:229] Guest: 2024-03-14 00:58:47.541841951 +0000 UTC Remote: 2024-03-14 00:58:47.435274983 +0000 UTC m=+363.148559319 (delta=106.566968ms)
	I0314 00:58:47.560225   65557 fix.go:200] guest clock delta is within tolerance: 106.566968ms
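	(Note: "date +%!s(MISSING).%!N(MISSING)" above is the same fmt escaping; the command sent over SSH is "date +%s.%N", and the reported delta is guest time minus the host-side probe timestamp. A rough shell equivalent of the same check, hypothetical invocation, illustrative only:)
	# compare the guest clock with the local clock; the log treats a ~100ms delta as within tolerance
	GUEST=$(ssh docker@192.168.50.72 date +%s.%N)
	LOCAL=$(date +%s.%N)
	echo "clock delta: $(echo "$GUEST - $LOCAL" | bc) s"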
	I0314 00:58:47.560232   65557 start.go:83] releasing machines lock for "embed-certs-164135", held for 21.500586263s
	I0314 00:58:47.560259   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.560524   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:47.563578   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.563984   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.564007   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.564165   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564627   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564837   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564919   65557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:47.564973   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.565070   65557 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:47.565097   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.567831   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568013   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568257   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.568284   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568398   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.568422   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568432   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.568625   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.568630   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.568821   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.568824   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.568927   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.568980   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.569131   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.652798   65557 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:47.689415   65557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:47.842567   65557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:47.849511   65557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:47.849574   65557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:47.868424   65557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
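	(Note: in the find invocation above the parentheses and globs are shown unescaped and "%!p(MISSING)" stands for a literal %p printf directive, both artifacts of how the command is logged. A hedged reconstruction with the shell escaping made explicit:)
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;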
	I0314 00:58:47.868448   65557 start.go:494] detecting cgroup driver to use...
	I0314 00:58:47.868509   65557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:47.887449   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:47.902382   65557 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:47.902442   65557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:47.916938   65557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:47.932214   65557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:48.055437   65557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:48.233856   65557 docker.go:233] disabling docker service ...
	I0314 00:58:48.233932   65557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:48.250632   65557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:48.265181   65557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:48.397526   65557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:48.539003   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:48.555791   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:48.576760   65557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:58:48.576812   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.589305   65557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:48.589410   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.602952   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.614619   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
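	(The three sed edits above pin the pause image, the cgroup manager and the conmon cgroup in the CRI-O drop-in. A quick way to confirm the result on the guest, illustrative only:)
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected after the edits:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"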
	I0314 00:58:48.626026   65557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:48.637921   65557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:48.648336   65557 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:48.648397   65557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:48.663603   65557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
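	(The sysctl probe above fails only because br_netfilter is not loaded yet; the key appears once the modprobe succeeds. Verifying both kernel settings by hand would look roughly like this, illustrative only:)
	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables    # key exists once the module is loaded
	cat /proc/sys/net/ipv4/ip_forward            # 1 after the echo above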
	I0314 00:58:48.674731   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:48.804506   65557 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:58:48.949960   65557 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:48.950037   65557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:48.955185   65557 start.go:562] Will wait 60s for crictl version
	I0314 00:58:48.955248   65557 ssh_runner.go:195] Run: which crictl
	I0314 00:58:48.959205   65557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:48.998285   65557 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:48.998378   65557 ssh_runner.go:195] Run: crio --version
	I0314 00:58:49.028352   65557 ssh_runner.go:195] Run: crio --version
	I0314 00:58:49.061493   65557 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 00:58:49.062817   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:49.065664   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:49.066015   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:49.066042   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:49.066240   65557 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:49.071178   65557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:49.085832   65557 kubeadm.go:877] updating cluster {Name:embed-certs-164135 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:49.086050   65557 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:58:49.086127   65557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:49.127181   65557 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 00:58:49.127258   65557 ssh_runner.go:195] Run: which lz4
	I0314 00:58:49.131578   65557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 00:58:49.136474   65557 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:49.136504   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 00:58:46.038840   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:48.540509   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:48.579595   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.079898   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.580139   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:50.079945   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:50.579977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:51.079981   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:51.580391   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:52.080057   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:52.579968   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:53.080503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.336251   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:51.841160   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:50.939606   65557 crio.go:444] duration metric: took 1.808075483s to copy over tarball
	I0314 00:58:50.939682   65557 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:53.536072   65557 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.596358521s)
	I0314 00:58:53.536109   65557 crio.go:451] duration metric: took 2.596476827s to extract the tarball
	I0314 00:58:53.536119   65557 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:53.579265   65557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:53.626350   65557 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:58:53.626371   65557 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:58:53.626378   65557 kubeadm.go:928] updating node { 192.168.50.72 8443 v1.28.4 crio true true} ...
	I0314 00:58:53.626500   65557 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-164135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:53.626586   65557 ssh_runner.go:195] Run: crio config
	I0314 00:58:53.679923   65557 cni.go:84] Creating CNI manager for ""
	I0314 00:58:53.679946   65557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:53.679958   65557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:53.679976   65557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.72 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-164135 NodeName:embed-certs-164135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:58:53.680104   65557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-164135"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:53.680163   65557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:58:53.690891   65557 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:53.690972   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:53.701173   65557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0314 00:58:53.719020   65557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:53.737828   65557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
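	(In the kubeadm config dumped above, "0%!"(MISSING) is again fmt escaping of a literal percent sign; the intended eviction thresholds are nodefs.available: "0%", nodefs.inodesFree: "0%" and imagefs.available: "0%". Once the rendered config has been copied to the guest, the 2159-byte scp above, it can be checked in place, for example:)
	grep -A4 evictionHard /var/tmp/minikube/kubeadm.yaml.new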
	I0314 00:58:53.756425   65557 ssh_runner.go:195] Run: grep 192.168.50.72	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:53.760294   65557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:53.773705   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:53.892346   65557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:53.910603   65557 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135 for IP: 192.168.50.72
	I0314 00:58:53.910627   65557 certs.go:194] generating shared ca certs ...
	I0314 00:58:53.910647   65557 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:53.910827   65557 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:53.910871   65557 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:53.910880   65557 certs.go:256] generating profile certs ...
	I0314 00:58:53.910979   65557 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/client.key
	I0314 00:58:53.911031   65557 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.key.e2917335
	I0314 00:58:53.911064   65557 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.key
	I0314 00:58:53.911166   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:53.911192   65557 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:53.911239   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:53.911262   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:53.911282   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:53.911306   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:53.911340   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:53.911957   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:53.966930   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:54.004054   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:54.052130   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:54.079203   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0314 00:58:54.120151   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:58:54.148078   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:54.176982   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 00:58:54.205291   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:54.231890   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:54.258106   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:54.284561   65557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:54.303013   65557 ssh_runner.go:195] Run: openssl version
	I0314 00:58:54.309043   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:54.320237   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.325350   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.325394   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.331618   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:51.037616   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:53.039388   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:53.579463   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.080043   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.579984   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:55.080165   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:55.580029   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.079980   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.580014   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.080139   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.580122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:58.080405   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.335226   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:56.841123   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:54.343570   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:54.542451   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.547508   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.547561   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.553553   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:54.565071   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:54.577055   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.582453   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.582503   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.588916   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
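	(The pattern above, hashing each PEM with "openssl x509 -hash -noout" and then linking the certificate under that hash with a ".0" suffix, is the standard OpenSSL CA-directory convention; it is how the names 51391683.0, 3ec20f2e.0 and b5213941.0 come about. A minimal sketch of the same two steps, file names taken from the log, illustrative only:)
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"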
	I0314 00:58:54.601405   65557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:54.606092   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:54.612639   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:54.619071   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:54.625702   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:54.631739   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:54.637769   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 00:58:54.644061   65557 kubeadm.go:391] StartCluster: {Name:embed-certs-164135 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:54.644158   65557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:54.644207   65557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:54.683466   65557 cri.go:89] found id: ""
	I0314 00:58:54.683537   65557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:54.695034   65557 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:54.695056   65557 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:54.695062   65557 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:54.695122   65557 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:54.706010   65557 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:54.707111   65557 kubeconfig.go:125] found "embed-certs-164135" server: "https://192.168.50.72:8443"
	I0314 00:58:54.709121   65557 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:54.722953   65557 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.72
	I0314 00:58:54.722994   65557 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:54.723009   65557 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:54.723100   65557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:54.787268   65557 cri.go:89] found id: ""
	I0314 00:58:54.787345   65557 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:54.816753   65557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:54.828303   65557 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:54.828333   65557 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:54.828385   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:54.841953   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:54.842070   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:54.854072   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:54.867993   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:54.868062   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:54.878707   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:54.888993   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:54.889070   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:54.899214   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:54.909228   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:54.909279   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:54.920066   65557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:54.931094   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.052967   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.727704   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.951743   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:56.038342   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:56.138332   65557 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:56.138421   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.639433   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.138622   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.167124   65557 api_server.go:72] duration metric: took 1.028792267s to wait for apiserver process to appear ...
	I0314 00:58:57.167147   65557 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:57.167168   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:58:57.167606   65557 api_server.go:269] stopped: https://192.168.50.72:8443/healthz: Get "https://192.168.50.72:8443/healthz": dial tcp 192.168.50.72:8443: connect: connection refused
	I0314 00:58:57.668020   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:58:55.579569   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:58.039695   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:00.039862   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:00.321979   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:59:00.322014   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:59:00.322033   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:00.354801   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:59:00.354829   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:59:00.668268   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:00.673345   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:59:00.673375   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:59:01.167291   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:01.172646   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:59:01.172674   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:59:01.667928   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:01.675916   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0314 00:59:01.684834   65557 api_server.go:141] control plane version: v1.28.4
	I0314 00:59:01.684866   65557 api_server.go:131] duration metric: took 4.517711081s to wait for apiserver health ...
	I0314 00:59:01.684877   65557 cni.go:84] Creating CNI manager for ""
	I0314 00:59:01.684886   65557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:59:01.687151   65557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
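
The healthz probe traced above reduces to a plain HTTP poll: issue GET /healthz, treat connection refused, 403 and 500 responses as "not ready yet", and stop at the first 200. A minimal sketch of that pattern follows (an illustration only, not minikube's api_server.go; the TLS-skipping client is an assumption made because the apiserver presents a self-signed certificate during bootstrap):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
    // or the timeout expires; connection errors, 403 and 500 mean "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // control plane answered 200: healthy
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.72:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
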
	I0314 00:58:58.580011   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:59.079610   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:59.579674   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:00.079861   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:00.579713   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.079977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.580027   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:02.079793   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:02.579549   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:03.080040   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
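
The repeating pgrep lines from process 66232 are the earlier half of the same wait: that profile is still looking for a running kube-apiserver process before it can probe healthz at all. A rough equivalent of the loop (illustrative; the pgrep pattern is taken verbatim from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process whose
    // command line mentions "minikube" exists; pgrep exits 0 once something matches.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
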
	I0314 00:59:01.688950   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:59:01.730963   65557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
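
The 457-byte conflist pushed to /etc/cni/net.d above is not reproduced in the log. As an illustration of the step only, here is a sketch that writes a typical bridge + portmap configuration; the JSON values are assumptions about what minikube generates, not the exact file:

    package main

    import "os"

    // bridgeConflist is a representative bridge CNI config; the real file's
    // contents are not shown in the log, so these values are placeholders.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
         "hairpinMode": true, "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        // Equivalent of the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step above.
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }
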
	I0314 00:59:01.777163   65557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:59:01.788546   65557 system_pods.go:59] 8 kube-system pods found
	I0314 00:59:01.788590   65557 system_pods.go:61] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:59:01.788602   65557 system_pods.go:61] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:59:01.788614   65557 system_pods.go:61] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:59:01.788626   65557 system_pods.go:61] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:59:01.788641   65557 system_pods.go:61] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 00:59:01.788650   65557 system_pods.go:61] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:59:01.788662   65557 system_pods.go:61] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:59:01.788681   65557 system_pods.go:61] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 00:59:01.788692   65557 system_pods.go:74] duration metric: took 11.509392ms to wait for pod list to return data ...
	I0314 00:59:01.788701   65557 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:59:01.795122   65557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:59:01.795147   65557 node_conditions.go:123] node cpu capacity is 2
	I0314 00:59:01.795157   65557 node_conditions.go:105] duration metric: took 6.44942ms to run NodePressure ...
	I0314 00:59:01.795172   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:59:02.044317   65557 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:59:02.050019   65557 kubeadm.go:733] kubelet initialised
	I0314 00:59:02.050040   65557 kubeadm.go:734] duration metric: took 5.70331ms waiting for restarted kubelet to initialise ...
	I0314 00:59:02.050049   65557 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:02.056678   65557 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.061780   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.061803   65557 pod_ready.go:81] duration metric: took 5.104116ms for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.061811   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.061817   65557 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.067102   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "etcd-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.067123   65557 pod_ready.go:81] duration metric: took 5.298132ms for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.067134   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "etcd-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.067142   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.072079   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.072097   65557 pod_ready.go:81] duration metric: took 4.946567ms for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.072105   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.072110   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.181781   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.181814   65557 pod_ready.go:81] duration metric: took 109.687713ms for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.181827   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.181835   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.581700   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-proxy-wjz6d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.581726   65557 pod_ready.go:81] duration metric: took 399.880012ms for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.581734   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-proxy-wjz6d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.581741   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.981386   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.981415   65557 pod_ready.go:81] duration metric: took 399.66708ms for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.981428   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.981434   65557 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:03.381927   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:03.381964   65557 pod_ready.go:81] duration metric: took 400.519247ms for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:03.381976   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:03.381986   65557 pod_ready.go:38] duration metric: took 1.331926826s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
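
The pod_ready lines that dominate this stretch all come down to one check: read the pod and see whether its PodReady condition is True, with the extra rule (visible above) that pods hosted on a node that is not yet Ready are skipped. A client-go sketch of the core check (illustrative, not minikube's pod_ready.go; the kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path assumed
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
                "coredns-5dd5756b68-r2dml", metav1.GetOptions{})
            if err == nil && podIsReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
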
	I0314 00:59:03.382007   65557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:59:03.397550   65557 ops.go:34] apiserver oom_adj: -16
	I0314 00:59:03.397571   65557 kubeadm.go:591] duration metric: took 8.702501848s to restartPrimaryControlPlane
	I0314 00:59:03.397583   65557 kubeadm.go:393] duration metric: took 8.753529728s to StartCluster
	I0314 00:59:03.397601   65557 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:59:03.397687   65557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:59:03.399793   65557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:59:03.400058   65557 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:59:03.402113   65557 out.go:177] * Verifying Kubernetes components...
	I0314 00:59:03.400139   65557 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:59:03.400293   65557 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:59:03.403722   65557 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-164135"
	I0314 00:59:03.403746   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:59:03.403773   65557 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-164135"
	W0314 00:59:03.403788   65557 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:59:03.403822   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.403725   65557 addons.go:69] Setting metrics-server=true in profile "embed-certs-164135"
	I0314 00:59:03.403888   65557 addons.go:234] Setting addon metrics-server=true in "embed-certs-164135"
	W0314 00:59:03.403922   65557 addons.go:243] addon metrics-server should already be in state true
	I0314 00:59:03.403727   65557 addons.go:69] Setting default-storageclass=true in profile "embed-certs-164135"
	I0314 00:59:03.403960   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.403978   65557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-164135"
	I0314 00:59:03.404257   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404295   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.404316   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404332   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.404355   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404387   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.420268   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0314 00:59:03.420835   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.421449   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.421474   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.421817   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.421860   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0314 00:59:03.422393   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.422414   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.422447   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.422893   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.422917   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.423232   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.423387   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.423804   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44335
	I0314 00:59:03.424136   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.424718   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.424737   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.426912   65557 addons.go:234] Setting addon default-storageclass=true in "embed-certs-164135"
	W0314 00:59:03.426935   65557 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:59:03.426962   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.427356   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.427387   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.427586   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.428046   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.428077   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.440982   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40425
	I0314 00:59:03.441492   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.442055   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.442077   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.442569   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I0314 00:59:03.442608   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.442838   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.443084   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.443708   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.443729   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.444112   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.444150   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33079
	I0314 00:59:03.444307   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.444598   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.444915   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.445374   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.445408   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.448170   65557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:59:03.445928   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.445963   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.449754   65557 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:59:03.448952   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.449778   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:59:03.451092   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.451092   65557 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:59.336088   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:01.338156   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:03.452582   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:59:03.451157   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.452695   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:59:03.452720   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.454750   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.455252   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.455282   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.455410   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.455600   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.455777   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.455944   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.455989   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.456439   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.456477   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.456710   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.456869   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.457034   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.457226   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.469815   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35435
	I0314 00:59:03.470353   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.470873   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.470895   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.471166   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.471370   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.472977   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.473244   65557 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:59:03.473258   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:59:03.473271   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.476223   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.476682   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.476709   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.476857   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.477040   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.477171   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.477302   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.616718   65557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:59:03.634198   65557 node_ready.go:35] waiting up to 6m0s for node "embed-certs-164135" to be "Ready" ...
	I0314 00:59:03.716113   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:59:03.749507   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:59:03.749536   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:59:03.755619   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:59:03.790208   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:59:03.790231   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:59:03.846087   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:59:03.846118   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:59:03.892534   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:59:04.977315   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.221655296s)
	I0314 00:59:04.977372   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977386   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977433   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.261285831s)
	I0314 00:59:04.977471   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977481   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977698   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.977722   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.977731   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977738   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977783   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.977705   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.978016   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.978033   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.978067   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.978803   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.978822   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.978842   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.978883   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.980542   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.980629   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.980683   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.985502   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.985521   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.985822   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.985854   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.985862   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.071684   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.179091576s)
	I0314 00:59:05.071736   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:05.071751   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:05.072016   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:05.072040   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.072050   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:05.072057   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:05.072248   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:05.072260   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.072271   65557 addons.go:470] Verifying addon metrics-server=true in "embed-certs-164135"
	I0314 00:59:05.074420   65557 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
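
Each addon enabled above follows the same mechanics: the manifest is copied onto the node ("scp memory") and then applied with the node-local kubectl against /var/lib/minikube/kubeconfig. A condensed sketch of the apply step (the paths and the kubectl invocation are taken from the log; the wrapper itself is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // applyManifests mirrors the "sudo KUBECONFIG=... kubectl apply -f ..." commands
    // in the log, running them through bash the way ssh_runner does.
    func applyManifests(manifests ...string) error {
        cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.28.4/kubectl apply -f " +
            strings.Join(manifests, " -f ")
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        _ = applyManifests("/etc/kubernetes/addons/storage-provisioner.yaml")
        _ = applyManifests("/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml")
    }
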
	I0314 00:59:02.537641   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:04.539777   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:03.580280   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:04.079957   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:04.580070   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:05.079965   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:05.580193   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:06.079657   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:06.580026   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:07.080460   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:07.579573   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:08.079458   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:03.836267   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:05.837427   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:07.838129   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:05.075856   65557 addons.go:505] duration metric: took 1.675722032s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 00:59:05.639116   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:08.138282   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:07.039088   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:09.538790   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:08.579872   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.080006   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.579949   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:10.079511   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:10.579616   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:11.080003   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:11.580335   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:12.079830   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:12.579519   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:13.080004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.839624   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:12.335977   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:10.138471   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:11.138534   65557 node_ready.go:49] node "embed-certs-164135" has status "Ready":"True"
	I0314 00:59:11.138572   65557 node_ready.go:38] duration metric: took 7.504341185s for node "embed-certs-164135" to be "Ready" ...
	I0314 00:59:11.138593   65557 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:11.145002   65557 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:11.150712   65557 pod_ready.go:92] pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:11.150735   65557 pod_ready.go:81] duration metric: took 5.69376ms for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:11.150743   65557 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:13.157122   65557 pod_ready.go:102] pod "etcd-embed-certs-164135" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:11.539006   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:14.038372   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:13.580021   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.079972   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.580562   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:15.079973   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:15.580183   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:16.080442   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:16.580265   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:17.079726   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:17.580004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:18.080000   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.336576   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:16.836200   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:15.158112   65557 pod_ready.go:92] pod "etcd-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.158134   65557 pod_ready.go:81] duration metric: took 4.0073854s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.158143   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.164046   65557 pod_ready.go:92] pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.164066   65557 pod_ready.go:81] duration metric: took 5.916933ms for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.164075   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.172381   65557 pod_ready.go:92] pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.172400   65557 pod_ready.go:81] duration metric: took 8.319741ms for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.172408   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.178027   65557 pod_ready.go:92] pod "kube-proxy-wjz6d" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.178047   65557 pod_ready.go:81] duration metric: took 5.632365ms for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.178066   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.185425   65557 pod_ready.go:92] pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.185445   65557 pod_ready.go:81] duration metric: took 7.370111ms for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.185455   65557 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:17.191963   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:19.198718   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:16.537469   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:18.537882   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:18.580382   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.079467   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.579813   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:20.080492   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:20.580051   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:21.079982   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:21.579462   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:22.079943   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:22.579753   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:23.079986   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.336004   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:21.835829   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:21.694213   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:24.192099   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:20.538270   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:23.038355   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:23.579609   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:24.080429   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:24.579806   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:25.079568   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:25.580411   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:26.079986   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:26.580297   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:27.079547   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:27.579543   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:28.080116   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:23.837356   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:25.844148   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.336761   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:26.193550   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.693261   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:25.537801   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.038015   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.580503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:29.079562   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:29.579984   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.079977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.579657   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:31.080002   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:31.580430   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:32.079709   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:32.579764   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:33.079717   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.835476   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.335371   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:31.192779   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.194092   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:30.537951   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:32.538810   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:35.038186   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.579468   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:34.079959   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:34.579891   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:35.079953   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:35.579666   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:36.080471   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:36.580528   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:36.580620   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:36.628794   66232 cri.go:89] found id: ""
	I0314 00:59:36.628825   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.628836   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:36.628844   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:36.628903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:36.665474   66232 cri.go:89] found id: ""
	I0314 00:59:36.665504   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.665514   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:36.665521   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:36.665612   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:36.703404   66232 cri.go:89] found id: ""
	I0314 00:59:36.703436   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.703443   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:36.703449   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:36.703515   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:36.739602   66232 cri.go:89] found id: ""
	I0314 00:59:36.739629   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.739636   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:36.739642   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:36.739698   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:36.777836   66232 cri.go:89] found id: ""
	I0314 00:59:36.777862   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.777869   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:36.777875   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:36.777921   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:36.817211   66232 cri.go:89] found id: ""
	I0314 00:59:36.817254   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.817264   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:36.817271   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:36.817320   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:36.855890   66232 cri.go:89] found id: ""
	I0314 00:59:36.855924   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.855943   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:36.855951   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:36.856007   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:36.894333   66232 cri.go:89] found id: ""
	I0314 00:59:36.894360   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.894371   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:36.894391   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:36.894406   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:36.909757   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:36.909796   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:37.039754   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:37.039774   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:37.039785   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:37.100601   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:37.100635   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:37.143950   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:37.143976   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:35.837374   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:38.335068   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:35.692269   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:37.692333   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:37.538270   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:40.039124   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:39.696850   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:39.720410   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:39.720480   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:39.759574   66232 cri.go:89] found id: ""
	I0314 00:59:39.759624   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.759635   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:39.759643   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:39.759719   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:39.802990   66232 cri.go:89] found id: ""
	I0314 00:59:39.803013   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.803021   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:39.803026   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:39.803090   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:39.850691   66232 cri.go:89] found id: ""
	I0314 00:59:39.850718   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.850729   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:39.850736   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:39.850831   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:39.890748   66232 cri.go:89] found id: ""
	I0314 00:59:39.890796   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.890806   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:39.890813   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:39.890871   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:39.929333   66232 cri.go:89] found id: ""
	I0314 00:59:39.929361   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.929368   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:39.929374   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:39.929428   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:39.969207   66232 cri.go:89] found id: ""
	I0314 00:59:39.969241   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.969248   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:39.969254   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:39.969328   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:40.006207   66232 cri.go:89] found id: ""
	I0314 00:59:40.006241   66232 logs.go:276] 0 containers: []
	W0314 00:59:40.006252   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:40.006260   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:40.006343   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:40.047357   66232 cri.go:89] found id: ""
	I0314 00:59:40.047384   66232 logs.go:276] 0 containers: []
	W0314 00:59:40.047391   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:40.047400   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:40.047418   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:40.095431   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:40.095461   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:40.151675   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:40.151710   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:40.169388   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:40.169426   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:40.252915   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:40.252941   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:40.252958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:42.828437   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:42.842753   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:42.842838   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:42.881157   66232 cri.go:89] found id: ""
	I0314 00:59:42.881189   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.881200   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:42.881207   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:42.881267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:42.921364   66232 cri.go:89] found id: ""
	I0314 00:59:42.921393   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.921405   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:42.921412   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:42.921477   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:42.956622   66232 cri.go:89] found id: ""
	I0314 00:59:42.956647   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.956655   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:42.956660   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:42.956705   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:42.994476   66232 cri.go:89] found id: ""
	I0314 00:59:42.994502   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.994514   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:42.994521   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:42.994580   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:43.032061   66232 cri.go:89] found id: ""
	I0314 00:59:43.032089   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.032099   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:43.032106   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:43.032177   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:43.073398   66232 cri.go:89] found id: ""
	I0314 00:59:43.073427   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.073444   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:43.073452   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:43.073527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:40.336003   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.336136   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:40.192758   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.193411   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.538036   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:45.038933   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:43.111407   66232 cri.go:89] found id: ""
	I0314 00:59:43.111891   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.111902   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:43.111909   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:43.111988   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:43.154347   66232 cri.go:89] found id: ""
	I0314 00:59:43.154374   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.154384   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:43.154393   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:43.154422   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:43.202605   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:43.202636   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:43.257108   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:43.257143   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:43.273252   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:43.273282   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:43.347646   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:43.347671   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:43.347687   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:45.920045   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:45.934299   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:45.934379   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:45.973556   66232 cri.go:89] found id: ""
	I0314 00:59:45.973588   66232 logs.go:276] 0 containers: []
	W0314 00:59:45.973599   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:45.973607   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:45.973668   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:46.012623   66232 cri.go:89] found id: ""
	I0314 00:59:46.012653   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.012660   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:46.012667   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:46.012720   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:46.052290   66232 cri.go:89] found id: ""
	I0314 00:59:46.052318   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.052328   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:46.052336   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:46.052401   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:46.089098   66232 cri.go:89] found id: ""
	I0314 00:59:46.089129   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.089139   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:46.089147   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:46.089207   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:46.149733   66232 cri.go:89] found id: ""
	I0314 00:59:46.149768   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.149778   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:46.149787   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:46.149856   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:46.210517   66232 cri.go:89] found id: ""
	I0314 00:59:46.210548   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.210555   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:46.210563   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:46.210631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:46.275257   66232 cri.go:89] found id: ""
	I0314 00:59:46.275288   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.275299   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:46.275307   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:46.275373   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:46.319784   66232 cri.go:89] found id: ""
	I0314 00:59:46.319808   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.319819   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:46.319829   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:46.319843   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:46.366285   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:46.366319   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:46.423978   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:46.424015   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:46.438508   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:46.438535   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:46.509518   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:46.509538   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:46.509552   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:44.337116   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:46.341237   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:44.698272   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:47.192460   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.193298   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:47.537766   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.541370   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.089210   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:49.105225   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:49.105298   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:49.146293   66232 cri.go:89] found id: ""
	I0314 00:59:49.146319   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.146326   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:49.146331   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:49.146377   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:49.190814   66232 cri.go:89] found id: ""
	I0314 00:59:49.190838   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.190847   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:49.190854   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:49.190910   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:49.230181   66232 cri.go:89] found id: ""
	I0314 00:59:49.230206   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.230214   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:49.230219   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:49.230267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:49.268437   66232 cri.go:89] found id: ""
	I0314 00:59:49.268468   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.268479   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:49.268486   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:49.268547   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:49.306838   66232 cri.go:89] found id: ""
	I0314 00:59:49.306869   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.306877   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:49.306883   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:49.306944   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:49.348907   66232 cri.go:89] found id: ""
	I0314 00:59:49.348937   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.348948   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:49.348956   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:49.349014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:49.391993   66232 cri.go:89] found id: ""
	I0314 00:59:49.392017   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.392025   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:49.392030   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:49.392133   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:49.433957   66232 cri.go:89] found id: ""
	I0314 00:59:49.433988   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.434000   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:49.434011   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:49.434026   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:49.490808   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:49.490846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:49.506203   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:49.506231   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:49.596998   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:49.597017   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:49.597034   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:49.683358   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:49.683396   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:52.230217   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:52.243787   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:52.243845   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:52.284399   66232 cri.go:89] found id: ""
	I0314 00:59:52.284424   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.284434   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:52.284441   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:52.284486   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:52.319413   66232 cri.go:89] found id: ""
	I0314 00:59:52.319439   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.319450   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:52.319457   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:52.319517   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:52.355774   66232 cri.go:89] found id: ""
	I0314 00:59:52.355804   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.355812   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:52.355818   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:52.355873   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:52.393420   66232 cri.go:89] found id: ""
	I0314 00:59:52.393445   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.393453   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:52.393459   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:52.393562   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:52.435598   66232 cri.go:89] found id: ""
	I0314 00:59:52.435627   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.435637   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:52.435646   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:52.435700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:52.478202   66232 cri.go:89] found id: ""
	I0314 00:59:52.478230   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.478241   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:52.478250   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:52.478300   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:52.515135   66232 cri.go:89] found id: ""
	I0314 00:59:52.515165   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.515176   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:52.515185   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:52.515251   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:52.553094   66232 cri.go:89] found id: ""
	I0314 00:59:52.553126   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.553143   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:52.553150   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:52.553174   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:52.568538   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:52.568565   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:52.643136   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:52.643164   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:52.643180   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:52.729674   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:52.729708   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:52.778312   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:52.778343   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:48.837200   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:51.336514   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:53.338910   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:51.693709   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:53.694241   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:52.037993   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:54.038771   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:55.333953   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:55.348232   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:55.348292   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:55.386488   66232 cri.go:89] found id: ""
	I0314 00:59:55.386517   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.386526   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:55.386534   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:55.386597   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:55.428706   66232 cri.go:89] found id: ""
	I0314 00:59:55.428737   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.428748   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:55.428755   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:55.428820   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:55.465448   66232 cri.go:89] found id: ""
	I0314 00:59:55.465478   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.465489   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:55.465495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:55.465558   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:55.503442   66232 cri.go:89] found id: ""
	I0314 00:59:55.503469   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.503479   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:55.503487   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:55.503582   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:55.542098   66232 cri.go:89] found id: ""
	I0314 00:59:55.542127   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.542137   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:55.542145   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:55.542209   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:55.580298   66232 cri.go:89] found id: ""
	I0314 00:59:55.580321   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.580329   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:55.580335   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:55.580405   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:55.625460   66232 cri.go:89] found id: ""
	I0314 00:59:55.625482   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.625489   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:55.625495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:55.625544   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:55.663273   66232 cri.go:89] found id: ""
	I0314 00:59:55.663301   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.663316   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:55.663327   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:55.663373   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:55.680020   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:55.680047   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:55.764504   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:55.764523   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:55.764537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:55.842804   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:55.842837   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:55.889505   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:55.889540   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:55.836332   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.335436   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:56.193387   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.692808   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:56.045666   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.538405   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.445178   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:58.459321   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:58.459397   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:58.498338   66232 cri.go:89] found id: ""
	I0314 00:59:58.498362   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.498369   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:58.498374   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:58.498422   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:58.536406   66232 cri.go:89] found id: ""
	I0314 00:59:58.536434   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.536444   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:58.536451   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:58.536509   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:58.574902   66232 cri.go:89] found id: ""
	I0314 00:59:58.574930   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.574937   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:58.574943   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:58.574988   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:58.613132   66232 cri.go:89] found id: ""
	I0314 00:59:58.613154   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.613162   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:58.613167   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:58.613211   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:58.651052   66232 cri.go:89] found id: ""
	I0314 00:59:58.651076   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.651085   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:58.651104   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:58.651170   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:58.686347   66232 cri.go:89] found id: ""
	I0314 00:59:58.686375   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.686385   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:58.686393   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:58.686443   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:58.725992   66232 cri.go:89] found id: ""
	I0314 00:59:58.726021   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.726030   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:58.726037   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:58.726113   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:58.764130   66232 cri.go:89] found id: ""
	I0314 00:59:58.764153   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.764161   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:58.764169   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:58.764181   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:58.816153   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:58.816195   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:58.831675   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:58.831703   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:58.912867   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:58.912890   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:58.912902   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:59.000502   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:59.000537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:01.544701   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:01.561114   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:01.561192   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:01.603886   66232 cri.go:89] found id: ""
	I0314 01:00:01.603916   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.603924   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:01.603929   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:01.603989   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:01.645142   66232 cri.go:89] found id: ""
	I0314 01:00:01.645174   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.645189   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:01.645196   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:01.645248   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:01.686281   66232 cri.go:89] found id: ""
	I0314 01:00:01.686317   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.686326   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:01.686332   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:01.686389   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:01.729909   66232 cri.go:89] found id: ""
	I0314 01:00:01.729945   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.729955   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:01.729963   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:01.730029   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:01.773709   66232 cri.go:89] found id: ""
	I0314 01:00:01.773746   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.773754   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:01.773770   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:01.773833   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:01.813535   66232 cri.go:89] found id: ""
	I0314 01:00:01.813560   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.813568   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:01.813573   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:01.813632   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:01.855452   66232 cri.go:89] found id: ""
	I0314 01:00:01.855482   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.855493   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:01.855499   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:01.855561   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:01.892261   66232 cri.go:89] found id: ""
	I0314 01:00:01.892287   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.892297   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:01.892308   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:01.892322   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:01.945227   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:01.945258   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:01.961280   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:01.961307   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:02.039204   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:02.039227   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:02.039241   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:02.116966   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:02.117002   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:00.840447   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:03.335752   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:00.693223   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:02.694565   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:00.538670   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:02.539348   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:05.037780   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:04.659869   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:04.673750   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:04.673818   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:04.713767   66232 cri.go:89] found id: ""
	I0314 01:00:04.713802   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.713813   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:04.713820   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:04.713882   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:04.750205   66232 cri.go:89] found id: ""
	I0314 01:00:04.750240   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.750252   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:04.750259   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:04.750323   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:04.789742   66232 cri.go:89] found id: ""
	I0314 01:00:04.789770   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.789778   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:04.789784   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:04.789832   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:04.826033   66232 cri.go:89] found id: ""
	I0314 01:00:04.826071   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.826091   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:04.826099   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:04.826161   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:04.865283   66232 cri.go:89] found id: ""
	I0314 01:00:04.865320   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.865330   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:04.865339   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:04.865387   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:04.906716   66232 cri.go:89] found id: ""
	I0314 01:00:04.906745   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.906756   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:04.906774   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:04.906835   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:04.943834   66232 cri.go:89] found id: ""
	I0314 01:00:04.943867   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.943879   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:04.943887   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:04.943953   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:04.986408   66232 cri.go:89] found id: ""
	I0314 01:00:04.986435   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.986445   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:04.986456   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:04.986472   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:05.040543   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:05.040583   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:05.055657   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:05.055685   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:05.133883   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:05.133907   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:05.133921   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:05.213133   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:05.213170   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:07.754533   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:07.768008   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:07.768084   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:07.807785   66232 cri.go:89] found id: ""
	I0314 01:00:07.807814   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.807823   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:07.807830   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:07.807889   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:07.847500   66232 cri.go:89] found id: ""
	I0314 01:00:07.847529   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.847539   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:07.847547   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:07.847609   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:07.886507   66232 cri.go:89] found id: ""
	I0314 01:00:07.886534   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.886557   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:07.886563   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:07.886619   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:07.923881   66232 cri.go:89] found id: ""
	I0314 01:00:07.923908   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.923918   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:07.923925   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:07.923985   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:07.959149   66232 cri.go:89] found id: ""
	I0314 01:00:07.959179   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.959190   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:07.959198   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:07.959257   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:07.995821   66232 cri.go:89] found id: ""
	I0314 01:00:07.995849   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.995861   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:07.995869   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:07.995926   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:08.033530   66232 cri.go:89] found id: ""
	I0314 01:00:08.033554   66232 logs.go:276] 0 containers: []
	W0314 01:00:08.033561   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:08.033567   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:08.033613   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:08.069304   66232 cri.go:89] found id: ""
	I0314 01:00:08.069332   66232 logs.go:276] 0 containers: []
	W0314 01:00:08.069341   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:08.069352   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:08.069366   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:05.838145   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:08.336193   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:05.192544   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:07.193040   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:09.195569   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:07.040795   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:09.538606   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:08.122695   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:08.122727   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:08.138439   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:08.138466   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:08.220553   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:08.220574   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:08.220586   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:08.301108   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:08.301143   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:10.858540   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:10.872473   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:10.872527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:10.911114   66232 cri.go:89] found id: ""
	I0314 01:00:10.911143   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.911154   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:10.911161   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:10.911218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:10.951647   66232 cri.go:89] found id: ""
	I0314 01:00:10.951678   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.951690   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:10.951697   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:10.951764   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:10.989244   66232 cri.go:89] found id: ""
	I0314 01:00:10.989272   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.989283   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:10.989291   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:10.989368   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:11.029977   66232 cri.go:89] found id: ""
	I0314 01:00:11.030004   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.030011   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:11.030017   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:11.030079   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:11.067444   66232 cri.go:89] found id: ""
	I0314 01:00:11.067467   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.067474   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:11.067480   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:11.067527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:11.104202   66232 cri.go:89] found id: ""
	I0314 01:00:11.104225   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.104233   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:11.104242   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:11.104302   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:11.143323   66232 cri.go:89] found id: ""
	I0314 01:00:11.143348   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.143376   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:11.143384   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:11.143438   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:11.182568   66232 cri.go:89] found id: ""
	I0314 01:00:11.182598   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.182608   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:11.182619   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:11.182640   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:11.199532   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:11.199572   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:11.276697   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:11.276722   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:11.276737   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:11.362086   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:11.362121   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:11.407686   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:11.407721   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:10.338610   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:12.835743   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:11.201752   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:13.692443   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:12.038010   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:14.038915   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:13.965971   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:13.981052   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:13.981124   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:14.021047   66232 cri.go:89] found id: ""
	I0314 01:00:14.021073   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.021085   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:14.021092   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:14.021150   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:14.066605   66232 cri.go:89] found id: ""
	I0314 01:00:14.066632   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.066638   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:14.066644   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:14.066689   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:14.105253   66232 cri.go:89] found id: ""
	I0314 01:00:14.105281   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.105290   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:14.105299   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:14.105407   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:14.141084   66232 cri.go:89] found id: ""
	I0314 01:00:14.141116   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.141126   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:14.141133   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:14.141194   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:14.177883   66232 cri.go:89] found id: ""
	I0314 01:00:14.177914   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.177924   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:14.177944   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:14.178010   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:14.217102   66232 cri.go:89] found id: ""
	I0314 01:00:14.217133   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.217144   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:14.217162   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:14.217218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:14.256624   66232 cri.go:89] found id: ""
	I0314 01:00:14.256652   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.256662   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:14.256669   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:14.256731   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:14.295330   66232 cri.go:89] found id: ""
	I0314 01:00:14.295358   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.295368   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:14.295378   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:14.295395   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:14.351898   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:14.351947   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:14.368360   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:14.368399   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:14.447629   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:14.447651   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:14.447678   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:14.536275   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:14.536307   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:17.079641   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:17.093657   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:17.093730   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:17.131290   66232 cri.go:89] found id: ""
	I0314 01:00:17.131318   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.131327   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:17.131333   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:17.131379   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:17.169832   66232 cri.go:89] found id: ""
	I0314 01:00:17.169864   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.169874   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:17.169882   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:17.169942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:17.206961   66232 cri.go:89] found id: ""
	I0314 01:00:17.206982   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.206989   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:17.206994   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:17.207047   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:17.245675   66232 cri.go:89] found id: ""
	I0314 01:00:17.245703   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.245714   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:17.245721   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:17.245776   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:17.287768   66232 cri.go:89] found id: ""
	I0314 01:00:17.287797   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.287808   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:17.287815   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:17.287881   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:17.322555   66232 cri.go:89] found id: ""
	I0314 01:00:17.322590   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.322600   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:17.322608   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:17.322669   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:17.361149   66232 cri.go:89] found id: ""
	I0314 01:00:17.361176   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.361190   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:17.361197   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:17.361255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:17.397191   66232 cri.go:89] found id: ""
	I0314 01:00:17.397218   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.397227   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:17.397236   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:17.397248   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:17.412959   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:17.412988   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:17.493344   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:17.493364   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:17.493375   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:17.573531   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:17.573564   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:17.616326   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:17.616369   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:14.837070   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:17.335625   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:15.693453   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:18.192702   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:16.537571   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:18.537742   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.171238   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:20.186834   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:20.186890   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:20.226834   66232 cri.go:89] found id: ""
	I0314 01:00:20.226856   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.226863   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:20.226868   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:20.226916   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:20.263003   66232 cri.go:89] found id: ""
	I0314 01:00:20.263032   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.263043   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:20.263052   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:20.263135   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:20.306354   66232 cri.go:89] found id: ""
	I0314 01:00:20.306378   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.306388   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:20.306397   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:20.306458   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:20.342460   66232 cri.go:89] found id: ""
	I0314 01:00:20.342491   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.342501   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:20.342509   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:20.342572   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:20.383367   66232 cri.go:89] found id: ""
	I0314 01:00:20.383395   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.383406   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:20.383414   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:20.383474   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:20.423190   66232 cri.go:89] found id: ""
	I0314 01:00:20.423220   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.423231   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:20.423240   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:20.423296   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:20.473454   66232 cri.go:89] found id: ""
	I0314 01:00:20.473501   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.473510   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:20.473518   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:20.473577   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:20.517922   66232 cri.go:89] found id: ""
	I0314 01:00:20.517954   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.517964   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:20.517976   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:20.517992   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:20.572023   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:20.572059   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:20.589573   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:20.589601   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:20.670843   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:20.670866   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:20.670881   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:20.753165   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:20.753201   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:19.336013   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:21.338995   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.194020   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:22.194237   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.539631   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:22.539868   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:25.037490   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:23.299823   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:23.313303   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:23.313398   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:23.352500   66232 cri.go:89] found id: ""
	I0314 01:00:23.352531   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.352542   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:23.352550   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:23.352610   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:23.391967   66232 cri.go:89] found id: ""
	I0314 01:00:23.391997   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.392005   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:23.392013   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:23.392078   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:23.433269   66232 cri.go:89] found id: ""
	I0314 01:00:23.433303   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.433314   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:23.433324   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:23.433388   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:23.471251   66232 cri.go:89] found id: ""
	I0314 01:00:23.471278   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.471290   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:23.471297   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:23.471359   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:23.507920   66232 cri.go:89] found id: ""
	I0314 01:00:23.507952   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.507960   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:23.507966   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:23.508023   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:23.550432   66232 cri.go:89] found id: ""
	I0314 01:00:23.550464   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.550474   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:23.550483   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:23.550570   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:23.589750   66232 cri.go:89] found id: ""
	I0314 01:00:23.589773   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.589781   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:23.589789   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:23.589853   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:23.626135   66232 cri.go:89] found id: ""
	I0314 01:00:23.626171   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.626191   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:23.626202   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:23.626217   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:23.681729   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:23.681763   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:23.698219   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:23.698246   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:23.773285   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:23.773309   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:23.773321   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:23.856417   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:23.856449   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:26.399787   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:26.414459   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:26.414525   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:26.452117   66232 cri.go:89] found id: ""
	I0314 01:00:26.452142   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.452153   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:26.452162   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:26.452223   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:26.488892   66232 cri.go:89] found id: ""
	I0314 01:00:26.488918   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.488925   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:26.488931   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:26.488980   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:26.530194   66232 cri.go:89] found id: ""
	I0314 01:00:26.530224   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.530234   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:26.530242   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:26.530307   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:26.571356   66232 cri.go:89] found id: ""
	I0314 01:00:26.571382   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.571394   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:26.571402   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:26.571469   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:26.611465   66232 cri.go:89] found id: ""
	I0314 01:00:26.611492   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.611500   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:26.611522   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:26.611572   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:26.649783   66232 cri.go:89] found id: ""
	I0314 01:00:26.649811   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.649821   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:26.649830   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:26.649894   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:26.687519   66232 cri.go:89] found id: ""
	I0314 01:00:26.687546   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.687556   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:26.687569   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:26.687631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:26.726277   66232 cri.go:89] found id: ""
	I0314 01:00:26.726311   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.726322   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:26.726333   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:26.726349   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:26.743133   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:26.743162   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:26.824026   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:26.824046   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:26.824062   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:26.907032   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:26.907065   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:26.977583   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:26.977609   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:23.837152   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:26.335576   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:24.694276   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:27.192662   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.193302   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:27.037952   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.038545   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.530758   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:29.546984   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:29.547050   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:29.589191   66232 cri.go:89] found id: ""
	I0314 01:00:29.589214   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.589222   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:29.589231   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:29.589294   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:29.630380   66232 cri.go:89] found id: ""
	I0314 01:00:29.630407   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.630419   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:29.630426   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:29.630488   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:29.667407   66232 cri.go:89] found id: ""
	I0314 01:00:29.667443   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.667455   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:29.667463   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:29.667524   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:29.705745   66232 cri.go:89] found id: ""
	I0314 01:00:29.705776   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.705784   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:29.705790   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:29.705851   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:29.745280   66232 cri.go:89] found id: ""
	I0314 01:00:29.745314   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.745324   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:29.745335   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:29.745390   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:29.782900   66232 cri.go:89] found id: ""
	I0314 01:00:29.782935   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.782945   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:29.782954   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:29.783014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:29.825324   66232 cri.go:89] found id: ""
	I0314 01:00:29.825352   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.825363   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:29.825371   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:29.825436   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:29.869433   66232 cri.go:89] found id: ""
	I0314 01:00:29.869466   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.869476   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:29.869487   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:29.869502   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:29.912468   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:29.912494   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:29.965515   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:29.965555   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:29.982343   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:29.982367   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:30.057772   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:30.057797   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:30.057814   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:32.644707   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:32.667874   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:32.667950   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:32.727931   66232 cri.go:89] found id: ""
	I0314 01:00:32.727960   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.727971   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:32.727979   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:32.728038   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:32.766885   66232 cri.go:89] found id: ""
	I0314 01:00:32.766911   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.766921   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:32.766929   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:32.766989   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:32.804099   66232 cri.go:89] found id: ""
	I0314 01:00:32.804128   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.804137   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:32.804143   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:32.804200   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:32.845468   66232 cri.go:89] found id: ""
	I0314 01:00:32.845498   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.845507   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:32.845516   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:32.845607   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:32.884350   66232 cri.go:89] found id: ""
	I0314 01:00:32.884372   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.884380   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:32.884386   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:32.884437   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:32.920634   66232 cri.go:89] found id: ""
	I0314 01:00:32.920676   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.920692   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:32.920700   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:32.920756   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:32.959586   66232 cri.go:89] found id: ""
	I0314 01:00:32.959616   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.959627   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:32.959634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:32.959699   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:32.998814   66232 cri.go:89] found id: ""
	I0314 01:00:32.998854   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.998865   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:32.998882   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:32.998895   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:33.054782   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:33.054813   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:33.069772   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:33.069807   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:00:28.836740   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.335908   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:33.336613   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.692393   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:33.695343   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.539723   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:34.038889   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	W0314 01:00:33.153893   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:33.153913   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:33.153925   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:33.234165   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:33.234197   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:35.781872   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:35.797220   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:35.797300   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:35.836749   66232 cri.go:89] found id: ""
	I0314 01:00:35.836773   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.836779   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:35.836785   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:35.836841   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:35.875754   66232 cri.go:89] found id: ""
	I0314 01:00:35.875782   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.875790   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:35.875797   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:35.875844   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:35.914337   66232 cri.go:89] found id: ""
	I0314 01:00:35.914360   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.914368   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:35.914373   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:35.914428   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:35.954287   66232 cri.go:89] found id: ""
	I0314 01:00:35.954306   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.954313   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:35.954318   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:35.954365   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:35.995361   66232 cri.go:89] found id: ""
	I0314 01:00:35.995385   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.995393   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:35.995398   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:35.995455   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:36.040462   66232 cri.go:89] found id: ""
	I0314 01:00:36.040488   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.040497   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:36.040503   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:36.040567   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:36.078740   66232 cri.go:89] found id: ""
	I0314 01:00:36.078786   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.078797   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:36.078814   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:36.078885   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:36.120165   66232 cri.go:89] found id: ""
	I0314 01:00:36.120193   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.120203   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:36.120213   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:36.120239   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:36.136275   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:36.136312   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:36.217907   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:36.217929   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:36.217944   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:36.295177   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:36.295212   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:36.342587   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:36.342623   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:35.336966   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:37.337764   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:36.193887   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.693150   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:36.538529   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.538996   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.900832   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:38.914693   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:38.914782   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:38.954297   66232 cri.go:89] found id: ""
	I0314 01:00:38.954333   66232 logs.go:276] 0 containers: []
	W0314 01:00:38.954347   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:38.954354   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:38.954414   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:38.992427   66232 cri.go:89] found id: ""
	I0314 01:00:38.992458   66232 logs.go:276] 0 containers: []
	W0314 01:00:38.992468   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:38.992474   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:38.992521   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:39.028595   66232 cri.go:89] found id: ""
	I0314 01:00:39.028629   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.028640   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:39.028647   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:39.028707   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:39.064418   66232 cri.go:89] found id: ""
	I0314 01:00:39.064443   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.064450   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:39.064456   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:39.064503   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:39.101007   66232 cri.go:89] found id: ""
	I0314 01:00:39.101050   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.101060   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:39.101066   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:39.101125   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:39.142913   66232 cri.go:89] found id: ""
	I0314 01:00:39.142940   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.142950   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:39.142957   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:39.143018   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:39.179957   66232 cri.go:89] found id: ""
	I0314 01:00:39.179986   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.179997   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:39.180007   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:39.180068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:39.219688   66232 cri.go:89] found id: ""
	I0314 01:00:39.219712   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.219720   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:39.219730   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:39.219747   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:39.234611   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:39.234642   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:39.306760   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:39.306808   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:39.306824   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:39.390739   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:39.390799   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:39.441782   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:39.441813   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:41.994667   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:42.008795   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:42.008865   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:42.045814   66232 cri.go:89] found id: ""
	I0314 01:00:42.045839   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.045846   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:42.045852   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:42.045903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:42.085519   66232 cri.go:89] found id: ""
	I0314 01:00:42.085550   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.085563   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:42.085571   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:42.085636   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:42.127334   66232 cri.go:89] found id: ""
	I0314 01:00:42.127359   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.127368   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:42.127374   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:42.127425   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:42.168890   66232 cri.go:89] found id: ""
	I0314 01:00:42.168915   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.168923   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:42.168929   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:42.168990   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:42.209915   66232 cri.go:89] found id: ""
	I0314 01:00:42.209937   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.209945   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:42.209950   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:42.210005   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:42.250858   66232 cri.go:89] found id: ""
	I0314 01:00:42.250880   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.250888   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:42.250897   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:42.250952   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:42.288731   66232 cri.go:89] found id: ""
	I0314 01:00:42.288779   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.288791   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:42.288799   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:42.288854   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:42.329002   66232 cri.go:89] found id: ""
	I0314 01:00:42.329030   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.329041   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:42.329052   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:42.329066   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:42.371408   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:42.371435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:42.429017   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:42.429053   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:42.446217   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:42.446255   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:42.525765   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:42.525786   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:42.525798   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:39.338188   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:41.836306   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:40.694284   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:43.193538   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:40.540167   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:43.039511   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.122600   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:45.137115   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:45.137172   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:45.177658   66232 cri.go:89] found id: ""
	I0314 01:00:45.177685   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.177693   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:45.177698   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:45.177758   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:45.218191   66232 cri.go:89] found id: ""
	I0314 01:00:45.218220   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.218228   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:45.218234   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:45.218291   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:45.263650   66232 cri.go:89] found id: ""
	I0314 01:00:45.263673   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.263682   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:45.263688   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:45.263741   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:45.299533   66232 cri.go:89] found id: ""
	I0314 01:00:45.299562   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.299573   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:45.299579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:45.299626   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:45.338985   66232 cri.go:89] found id: ""
	I0314 01:00:45.339011   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.339021   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:45.339028   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:45.339089   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:45.380178   66232 cri.go:89] found id: ""
	I0314 01:00:45.380202   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.380210   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:45.380216   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:45.380272   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:45.420424   66232 cri.go:89] found id: ""
	I0314 01:00:45.420458   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.420470   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:45.420478   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:45.420540   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:45.460829   66232 cri.go:89] found id: ""
	I0314 01:00:45.460852   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.460860   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:45.460870   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:45.460886   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:45.516541   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:45.516578   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:45.532856   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:45.532880   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:45.611749   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:45.611772   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:45.611786   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:45.693268   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:45.693297   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:43.836776   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:46.336671   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.692531   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:47.692748   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.539526   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:47.542274   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.037560   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
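For the waiting profiles, pod_ready.go simply re-checks the Ready condition of the metrics-server pod every couple of seconds. A hedged equivalent with plain kubectl is sketched below; the context name is a placeholder, while the pod name is taken from the trace above:

    # show the pods the poller is watching (context is a placeholder)
    kubectl --context <profile> -n kube-system get pods -o wide
    # wait on the same Ready condition that pod_ready.go checks
    kubectl --context <profile> -n kube-system wait \
      --for=condition=ready pod/metrics-server-57f55c9bc5-kll8v --timeout=60s
    # inspect events and container state to see why it never becomes Ready
    kubectl --context <profile> -n kube-system describe pod metrics-server-57f55c9bc5-kll8v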
	I0314 01:00:48.240420   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:48.254985   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:48.255045   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:48.294167   66232 cri.go:89] found id: ""
	I0314 01:00:48.294190   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.294198   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:48.294204   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:48.294265   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:48.331189   66232 cri.go:89] found id: ""
	I0314 01:00:48.331214   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.331223   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:48.331231   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:48.331291   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:48.367601   66232 cri.go:89] found id: ""
	I0314 01:00:48.367641   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.367652   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:48.367660   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:48.367723   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:48.405032   66232 cri.go:89] found id: ""
	I0314 01:00:48.405061   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.405072   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:48.405080   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:48.405148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:48.444641   66232 cri.go:89] found id: ""
	I0314 01:00:48.444664   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.444672   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:48.444678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:48.444737   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:48.481624   66232 cri.go:89] found id: ""
	I0314 01:00:48.481653   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.481661   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:48.481667   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:48.481718   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:48.518944   66232 cri.go:89] found id: ""
	I0314 01:00:48.518976   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.518984   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:48.518989   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:48.519047   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:48.558455   66232 cri.go:89] found id: ""
	I0314 01:00:48.558495   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.558506   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:48.558518   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:48.558533   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:48.604953   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:48.604983   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:48.655766   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:48.655799   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:48.670370   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:48.670395   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:48.750567   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:48.750588   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:48.750601   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:51.342004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:51.356115   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:51.356180   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:51.393740   66232 cri.go:89] found id: ""
	I0314 01:00:51.393766   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.393773   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:51.393778   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:51.393824   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:51.432939   66232 cri.go:89] found id: ""
	I0314 01:00:51.432969   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.432980   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:51.432998   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:51.433066   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:51.469309   66232 cri.go:89] found id: ""
	I0314 01:00:51.469332   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.469340   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:51.469345   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:51.469395   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:51.506576   66232 cri.go:89] found id: ""
	I0314 01:00:51.506606   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.506618   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:51.506626   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:51.506687   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:51.547323   66232 cri.go:89] found id: ""
	I0314 01:00:51.547348   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.547358   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:51.547365   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:51.547422   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:51.588257   66232 cri.go:89] found id: ""
	I0314 01:00:51.588281   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.588289   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:51.588295   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:51.588353   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:51.629026   66232 cri.go:89] found id: ""
	I0314 01:00:51.629049   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.629057   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:51.629064   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:51.629116   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:51.668857   66232 cri.go:89] found id: ""
	I0314 01:00:51.668890   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.668903   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:51.668914   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:51.668930   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:51.724282   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:51.724329   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:51.739513   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:51.739543   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:51.815089   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:51.815116   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:51.815132   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:51.898576   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:51.898613   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:48.836517   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.837605   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:53.334491   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.192748   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:52.694281   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:52.038194   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:54.538685   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:54.441122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:54.456300   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:54.456358   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:54.492731   66232 cri.go:89] found id: ""
	I0314 01:00:54.492764   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.492776   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:54.492784   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:54.492847   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:54.530965   66232 cri.go:89] found id: ""
	I0314 01:00:54.530994   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.531005   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:54.531013   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:54.531075   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:54.570440   66232 cri.go:89] found id: ""
	I0314 01:00:54.570470   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.570487   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:54.570495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:54.570557   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:54.611569   66232 cri.go:89] found id: ""
	I0314 01:00:54.611592   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.611599   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:54.611606   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:54.611660   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:54.648383   66232 cri.go:89] found id: ""
	I0314 01:00:54.648412   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.648421   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:54.648427   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:54.648476   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:54.686598   66232 cri.go:89] found id: ""
	I0314 01:00:54.686621   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.686636   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:54.686644   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:54.686701   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:54.726413   66232 cri.go:89] found id: ""
	I0314 01:00:54.726436   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.726444   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:54.726450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:54.726496   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:54.764126   66232 cri.go:89] found id: ""
	I0314 01:00:54.764167   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.764177   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:54.764187   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:54.764201   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:54.841584   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:54.841612   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:54.841628   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:54.929736   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:54.929770   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:54.972612   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:54.972638   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:55.038415   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:55.038443   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:57.553419   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:57.567807   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:57.567865   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:57.608042   66232 cri.go:89] found id: ""
	I0314 01:00:57.608069   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.608077   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:57.608082   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:57.608138   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:57.647991   66232 cri.go:89] found id: ""
	I0314 01:00:57.648022   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.648031   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:57.648036   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:57.648096   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:57.687506   66232 cri.go:89] found id: ""
	I0314 01:00:57.687529   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.687537   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:57.687544   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:57.687603   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:57.726178   66232 cri.go:89] found id: ""
	I0314 01:00:57.726214   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.726224   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:57.726233   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:57.726297   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:57.763847   66232 cri.go:89] found id: ""
	I0314 01:00:57.763874   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.763881   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:57.763887   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:57.763946   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:57.800962   66232 cri.go:89] found id: ""
	I0314 01:00:57.800990   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.801001   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:57.801010   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:57.801063   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:57.838942   66232 cri.go:89] found id: ""
	I0314 01:00:57.838963   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.838970   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:57.838975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:57.839021   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:57.875376   66232 cri.go:89] found id: ""
	I0314 01:00:57.875405   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.875415   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:57.875424   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:57.875435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:57.917732   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:57.917755   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:57.971528   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:57.971561   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:57.986854   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:57.986879   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:58.066955   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:58.066975   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:58.066985   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:55.337356   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.836856   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:55.191933   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.193287   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:59.197833   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.039559   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:59.537165   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:00.655786   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:00.672026   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:00.672105   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:00.711128   66232 cri.go:89] found id: ""
	I0314 01:01:00.711157   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.711167   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:00.711174   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:00.711236   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:00.748236   66232 cri.go:89] found id: ""
	I0314 01:01:00.748264   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.748276   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:00.748284   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:00.748347   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:00.787436   66232 cri.go:89] found id: ""
	I0314 01:01:00.787470   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.787478   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:00.787486   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:00.787536   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:00.828583   66232 cri.go:89] found id: ""
	I0314 01:01:00.828605   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.828615   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:00.828623   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:00.828683   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:00.866856   66232 cri.go:89] found id: ""
	I0314 01:01:00.866885   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.866896   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:00.866903   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:00.866964   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:00.904860   66232 cri.go:89] found id: ""
	I0314 01:01:00.904883   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.904890   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:00.904895   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:00.904943   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:00.942199   66232 cri.go:89] found id: ""
	I0314 01:01:00.942232   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.942243   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:00.942253   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:00.942322   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:01.003925   66232 cri.go:89] found id: ""
	I0314 01:01:01.003951   66232 logs.go:276] 0 containers: []
	W0314 01:01:01.003961   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:01.003972   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:01.003987   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:01.057875   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:01.057903   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:01.074102   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:01.074128   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:01.147570   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:01.147602   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:01.147617   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:01.229816   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:01.229846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
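The "container status" step above relies on a small shell fallback: it resolves crictl if it is on PATH and otherwise falls back to `docker ps -a`. The same idiom, spelled out for readability (purely illustrative, identical in behavior to the command in the trace):

    # prefer crictl when available, otherwise fall back to docker
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a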
	I0314 01:01:00.337903   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:02.836288   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:01.693336   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:04.193878   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:01.539596   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:04.037927   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:03.775990   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:03.789826   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:03.789893   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:03.832595   66232 cri.go:89] found id: ""
	I0314 01:01:03.832620   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.832631   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:03.832639   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:03.832701   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:03.870895   66232 cri.go:89] found id: ""
	I0314 01:01:03.870914   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.870922   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:03.870928   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:03.870975   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:03.909337   66232 cri.go:89] found id: ""
	I0314 01:01:03.909368   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.909379   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:03.909387   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:03.909447   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:03.952071   66232 cri.go:89] found id: ""
	I0314 01:01:03.952100   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.952110   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:03.952119   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:03.952182   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:03.989374   66232 cri.go:89] found id: ""
	I0314 01:01:03.989403   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.989413   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:03.989421   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:03.989470   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:04.027654   66232 cri.go:89] found id: ""
	I0314 01:01:04.027683   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.027693   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:04.027702   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:04.027770   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:04.064870   66232 cri.go:89] found id: ""
	I0314 01:01:04.064904   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.064915   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:04.064923   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:04.064978   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:04.103214   66232 cri.go:89] found id: ""
	I0314 01:01:04.103246   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.103257   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:04.103268   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:04.103282   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:04.154061   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:04.154098   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:04.168955   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:04.168981   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:04.245214   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:04.245239   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:04.245254   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:04.321782   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:04.321822   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:06.864312   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:06.879181   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:06.879259   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:06.919707   66232 cri.go:89] found id: ""
	I0314 01:01:06.919731   66232 logs.go:276] 0 containers: []
	W0314 01:01:06.919742   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:06.919749   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:06.919809   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:06.964118   66232 cri.go:89] found id: ""
	I0314 01:01:06.964154   66232 logs.go:276] 0 containers: []
	W0314 01:01:06.964165   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:06.964173   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:06.964222   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:07.005923   66232 cri.go:89] found id: ""
	I0314 01:01:07.005948   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.005955   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:07.005961   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:07.006014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:07.048297   66232 cri.go:89] found id: ""
	I0314 01:01:07.048329   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.048336   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:07.048342   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:07.048400   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:07.089009   66232 cri.go:89] found id: ""
	I0314 01:01:07.089036   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.089044   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:07.089049   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:07.089108   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:07.125228   66232 cri.go:89] found id: ""
	I0314 01:01:07.125251   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.125259   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:07.125269   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:07.125329   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:07.163710   66232 cri.go:89] found id: ""
	I0314 01:01:07.163736   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.163743   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:07.163751   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:07.163797   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:07.202886   66232 cri.go:89] found id: ""
	I0314 01:01:07.202909   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.202916   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:07.202924   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:07.202936   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:07.249071   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:07.249098   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:07.304923   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:07.304958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:07.319983   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:07.320011   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:07.398592   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:07.398627   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:07.398640   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:05.337479   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:07.836304   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:06.692373   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.192747   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:06.539182   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.038291   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.987439   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:10.002348   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:10.002424   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:10.039153   66232 cri.go:89] found id: ""
	I0314 01:01:10.039173   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.039179   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:10.039185   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:10.039236   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:10.073527   66232 cri.go:89] found id: ""
	I0314 01:01:10.073557   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.073568   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:10.073575   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:10.073650   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:10.112192   66232 cri.go:89] found id: ""
	I0314 01:01:10.112213   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.112223   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:10.112230   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:10.112288   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:10.152821   66232 cri.go:89] found id: ""
	I0314 01:01:10.152848   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.152857   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:10.152862   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:10.152919   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:10.189327   66232 cri.go:89] found id: ""
	I0314 01:01:10.189352   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.189364   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:10.189371   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:10.189427   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:10.233885   66232 cri.go:89] found id: ""
	I0314 01:01:10.233909   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.233917   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:10.233923   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:10.233975   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:10.272033   66232 cri.go:89] found id: ""
	I0314 01:01:10.272061   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.272069   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:10.272075   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:10.272129   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:10.312680   66232 cri.go:89] found id: ""
	I0314 01:01:10.312706   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.312717   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
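Each "listing CRI containers" / `found id: ""` pair above is a crictl query that comes back empty, which is why every control-plane component is reported as missing while the apiserver is down. An illustrative sketch of that enumeration step (the command line is the one shown in the log; the helper and the component list here are only for demonstration):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers runs the same query as the log lines above and returns
    // the container IDs crictl prints, one per line; empty output means
    // "no container was found matching" that name.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainers(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c)
                continue
            }
            fmt.Println(c, ids)
        }
    }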
	I0314 01:01:10.312727   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:10.312742   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:10.327507   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:10.327537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:10.410274   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:10.410299   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:10.410311   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:10.498686   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:10.498721   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:10.543509   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:10.543561   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:13.098621   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
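Each cycle opens with the pgrep call above: a process-level check for a running kube-apiserver before any containers are listed. In that invocation -f matches against the full command line, -x requires the whole line to match the pattern, and -n picks the newest match; a non-zero exit simply means no apiserver process exists yet. A small illustrative wrapper (not minikube code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same pattern as the log line above; exits non-zero while no process matches.
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("no kube-apiserver process found:", err)
            return
        }
        fmt.Println("kube-apiserver PID:", strings.TrimSpace(string(out)))
    }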
	I0314 01:01:10.335968   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:12.836293   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:11.692899   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.696150   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:11.538154   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.540093   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.114598   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:13.114685   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:13.169907   66232 cri.go:89] found id: ""
	I0314 01:01:13.169930   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.169937   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:13.169943   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:13.169999   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:13.237394   66232 cri.go:89] found id: ""
	I0314 01:01:13.237417   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.237429   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:13.237439   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:13.237502   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:13.295227   66232 cri.go:89] found id: ""
	I0314 01:01:13.295250   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.295258   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:13.295265   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:13.295326   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:13.333351   66232 cri.go:89] found id: ""
	I0314 01:01:13.333378   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.333388   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:13.333396   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:13.333457   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:13.376480   66232 cri.go:89] found id: ""
	I0314 01:01:13.376503   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.376511   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:13.376516   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:13.376578   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:13.416746   66232 cri.go:89] found id: ""
	I0314 01:01:13.416778   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.416786   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:13.416792   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:13.416842   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:13.455971   66232 cri.go:89] found id: ""
	I0314 01:01:13.456004   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.456014   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:13.456022   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:13.456090   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:13.493921   66232 cri.go:89] found id: ""
	I0314 01:01:13.493952   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.493964   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:13.493975   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:13.493994   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:13.582269   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:13.582317   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:13.627643   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:13.627675   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:13.680989   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:13.681021   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:13.696675   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:13.696708   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:13.768850   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:16.269385   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:16.284543   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:16.284607   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:16.322317   66232 cri.go:89] found id: ""
	I0314 01:01:16.322345   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.322356   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:16.322364   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:16.322412   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:16.362651   66232 cri.go:89] found id: ""
	I0314 01:01:16.362686   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.362697   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:16.362705   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:16.362782   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:16.403239   66232 cri.go:89] found id: ""
	I0314 01:01:16.403268   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.403276   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:16.403282   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:16.403339   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:16.442326   66232 cri.go:89] found id: ""
	I0314 01:01:16.442348   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.442355   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:16.442361   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:16.442423   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:16.480694   66232 cri.go:89] found id: ""
	I0314 01:01:16.480722   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.480733   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:16.480741   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:16.480809   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:16.521555   66232 cri.go:89] found id: ""
	I0314 01:01:16.521585   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.521596   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:16.521603   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:16.521663   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:16.564517   66232 cri.go:89] found id: ""
	I0314 01:01:16.564544   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.564555   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:16.564561   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:16.564641   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:16.602650   66232 cri.go:89] found id: ""
	I0314 01:01:16.602680   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.602690   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:16.602701   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:16.602715   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:16.645742   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:16.645777   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:16.704940   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:16.704972   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:16.720393   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:16.720420   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:16.799609   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:16.799640   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:16.799655   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:14.836773   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:17.336818   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:16.192938   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:18.193968   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:16.038263   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:18.538739   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:19.388482   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:19.402293   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:19.402372   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:19.439978   66232 cri.go:89] found id: ""
	I0314 01:01:19.440002   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.440025   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:19.440033   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:19.440112   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:19.475984   66232 cri.go:89] found id: ""
	I0314 01:01:19.476011   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.476019   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:19.476026   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:19.476078   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:19.512705   66232 cri.go:89] found id: ""
	I0314 01:01:19.512733   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.512742   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:19.512748   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:19.512793   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:19.552300   66232 cri.go:89] found id: ""
	I0314 01:01:19.552329   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.552339   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:19.552347   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:19.552413   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:19.598630   66232 cri.go:89] found id: ""
	I0314 01:01:19.598660   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.598670   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:19.598678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:19.598741   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:19.635883   66232 cri.go:89] found id: ""
	I0314 01:01:19.635912   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.635924   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:19.635931   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:19.635991   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:19.670339   66232 cri.go:89] found id: ""
	I0314 01:01:19.670364   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.670371   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:19.670377   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:19.670430   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:19.709469   66232 cri.go:89] found id: ""
	I0314 01:01:19.709512   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.709522   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:19.709533   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:19.709551   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:19.782157   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:19.782181   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:19.782192   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:19.866496   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:19.866531   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:19.910167   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:19.910198   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:19.963516   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:19.963546   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:22.478995   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:22.493273   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:22.493351   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:22.531559   66232 cri.go:89] found id: ""
	I0314 01:01:22.531581   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.531588   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:22.531594   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:22.531651   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:22.569478   66232 cri.go:89] found id: ""
	I0314 01:01:22.569508   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.569516   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:22.569524   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:22.569570   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:22.607573   66232 cri.go:89] found id: ""
	I0314 01:01:22.607599   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.607615   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:22.607625   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:22.607700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:22.644849   66232 cri.go:89] found id: ""
	I0314 01:01:22.644875   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.644885   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:22.644893   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:22.644950   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:22.683745   66232 cri.go:89] found id: ""
	I0314 01:01:22.683771   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.683779   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:22.683785   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:22.683845   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:22.723426   66232 cri.go:89] found id: ""
	I0314 01:01:22.723455   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.723462   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:22.723468   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:22.723512   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:22.761814   66232 cri.go:89] found id: ""
	I0314 01:01:22.761850   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.761860   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:22.761867   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:22.761918   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:22.799649   66232 cri.go:89] found id: ""
	I0314 01:01:22.799677   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.799687   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:22.799697   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:22.799707   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:22.840183   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:22.840215   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:22.893385   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:22.893416   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:22.909225   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:22.909250   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:22.982333   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:22.982353   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:22.982364   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:19.835211   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:21.835716   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:20.194985   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:22.692889   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:21.040809   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:23.538236   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:25.560639   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:25.575003   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:25.575082   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:25.613540   66232 cri.go:89] found id: ""
	I0314 01:01:25.613571   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.613583   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:25.613591   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:25.613653   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:25.652340   66232 cri.go:89] found id: ""
	I0314 01:01:25.652365   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.652373   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:25.652379   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:25.652425   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:25.691035   66232 cri.go:89] found id: ""
	I0314 01:01:25.691070   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.691079   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:25.691087   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:25.691152   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:25.729666   66232 cri.go:89] found id: ""
	I0314 01:01:25.729695   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.729705   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:25.729713   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:25.729783   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:25.766836   66232 cri.go:89] found id: ""
	I0314 01:01:25.766863   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.766871   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:25.766877   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:25.766934   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:25.813690   66232 cri.go:89] found id: ""
	I0314 01:01:25.813715   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.813727   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:25.813734   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:25.813796   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:25.858630   66232 cri.go:89] found id: ""
	I0314 01:01:25.858668   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.858679   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:25.858688   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:25.858774   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:25.896340   66232 cri.go:89] found id: ""
	I0314 01:01:25.896365   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.896372   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:25.896380   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:25.896392   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:25.949480   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:25.949513   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:25.965185   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:25.965211   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:26.041208   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:26.041228   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:26.041243   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:26.123892   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:26.123928   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:23.839306   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:26.335177   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.337014   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:24.695636   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:27.193395   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:29.200714   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:26.037924   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.038831   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.666449   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:28.679889   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:28.679948   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:28.717183   66232 cri.go:89] found id: ""
	I0314 01:01:28.717207   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.717214   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:28.717220   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:28.717275   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:28.761049   66232 cri.go:89] found id: ""
	I0314 01:01:28.761070   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.761077   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:28.761083   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:28.761133   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:28.800429   66232 cri.go:89] found id: ""
	I0314 01:01:28.800454   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.800462   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:28.800468   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:28.800523   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:28.841757   66232 cri.go:89] found id: ""
	I0314 01:01:28.841780   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.841788   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:28.841793   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:28.841838   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:28.883658   66232 cri.go:89] found id: ""
	I0314 01:01:28.883686   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.883696   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:28.883703   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:28.883759   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:28.918811   66232 cri.go:89] found id: ""
	I0314 01:01:28.918840   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.918851   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:28.918858   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:28.918916   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:28.955088   66232 cri.go:89] found id: ""
	I0314 01:01:28.955119   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.955130   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:28.955138   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:28.955195   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:28.992865   66232 cri.go:89] found id: ""
	I0314 01:01:28.992891   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.992903   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:28.992913   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:28.992931   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:29.080095   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:29.080132   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:29.127764   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:29.127789   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:29.182075   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:29.182109   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:29.198865   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:29.198891   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:29.277413   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
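The timestamps show the same gather-and-probe cycle repeating roughly every three seconds (01:01:07, 01:01:10, 01:01:13, ...) while the harness waits for the restarted control plane to answer. A stdlib sketch of that poll-until-ready pattern follows; the three-second interval mirrors the cadence seen here, while the five-minute deadline is only an illustrative value:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        deadline := time.Now().Add(5 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            // Probe the same port the failing kubectl calls are refused on.
            conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver is answering")
                return
            }
            time.Sleep(3 * time.Second) // matches the ~3s spacing between log cycles
        }
        fmt.Println("timed out waiting for the apiserver")
    }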
	I0314 01:01:31.777693   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:31.792353   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:31.792426   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:31.830873   66232 cri.go:89] found id: ""
	I0314 01:01:31.830897   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.830904   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:31.830910   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:31.830955   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:31.868648   66232 cri.go:89] found id: ""
	I0314 01:01:31.868670   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.868677   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:31.868683   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:31.868733   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:31.910124   66232 cri.go:89] found id: ""
	I0314 01:01:31.910146   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.910155   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:31.910160   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:31.910209   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:31.957558   66232 cri.go:89] found id: ""
	I0314 01:01:31.957584   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.957592   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:31.957598   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:31.957652   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:32.000112   66232 cri.go:89] found id: ""
	I0314 01:01:32.000139   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.000157   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:32.000165   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:32.000229   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:32.037838   66232 cri.go:89] found id: ""
	I0314 01:01:32.037865   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.037876   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:32.037888   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:32.037949   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:32.076069   66232 cri.go:89] found id: ""
	I0314 01:01:32.076093   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.076101   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:32.076107   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:32.076172   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:32.114702   66232 cri.go:89] found id: ""
	I0314 01:01:32.114730   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.114737   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:32.114745   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:32.114757   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:32.162043   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:32.162078   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:32.219038   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:32.219075   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:32.234331   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:32.234358   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:32.307667   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:32.307688   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:32.307700   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:30.835936   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:33.335575   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:31.692739   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:33.693455   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:30.537265   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:32.538754   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:35.037382   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:34.893945   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:34.907888   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:34.907966   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:34.944887   66232 cri.go:89] found id: ""
	I0314 01:01:34.944911   66232 logs.go:276] 0 containers: []
	W0314 01:01:34.944919   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:34.944925   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:34.944973   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:34.992937   66232 cri.go:89] found id: ""
	I0314 01:01:34.992964   66232 logs.go:276] 0 containers: []
	W0314 01:01:34.992974   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:34.992982   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:34.993040   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:35.030147   66232 cri.go:89] found id: ""
	I0314 01:01:35.030171   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.030178   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:35.030184   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:35.030230   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:35.065966   66232 cri.go:89] found id: ""
	I0314 01:01:35.065999   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.066010   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:35.066018   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:35.066077   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:35.104221   66232 cri.go:89] found id: ""
	I0314 01:01:35.104251   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.104262   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:35.104270   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:35.104347   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:35.145221   66232 cri.go:89] found id: ""
	I0314 01:01:35.145245   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.145253   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:35.145258   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:35.145313   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:35.185119   66232 cri.go:89] found id: ""
	I0314 01:01:35.185152   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.185162   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:35.185168   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:35.185228   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:35.228309   66232 cri.go:89] found id: ""
	I0314 01:01:35.228341   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.228352   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:35.228363   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:35.228381   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:35.242185   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:35.242213   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:35.318542   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:35.318564   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:35.318578   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:35.396003   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:35.396042   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:35.437435   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:35.437464   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:37.992023   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:38.007180   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:38.007260   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:38.047871   66232 cri.go:89] found id: ""
	I0314 01:01:38.047906   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.047917   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:38.047925   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:38.047982   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:38.085359   66232 cri.go:89] found id: ""
	I0314 01:01:38.085388   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.085397   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:38.085404   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:38.085462   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:35.336258   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:37.835151   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:35.696328   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:38.192502   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:37.037490   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:39.038097   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:38.126190   66232 cri.go:89] found id: ""
	I0314 01:01:38.126219   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.126227   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:38.126233   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:38.126285   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:38.163163   66232 cri.go:89] found id: ""
	I0314 01:01:38.163190   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.163197   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:38.163202   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:38.163261   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:38.204338   66232 cri.go:89] found id: ""
	I0314 01:01:38.204360   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.204367   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:38.204372   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:38.204429   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:38.246252   66232 cri.go:89] found id: ""
	I0314 01:01:38.246278   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.246288   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:38.246296   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:38.246357   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:38.281173   66232 cri.go:89] found id: ""
	I0314 01:01:38.281198   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.281205   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:38.281211   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:38.281258   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:38.323744   66232 cri.go:89] found id: ""
	I0314 01:01:38.323774   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.323784   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:38.323794   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:38.323808   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:38.377987   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:38.378020   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:38.392879   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:38.392904   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:38.479475   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:38.479501   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:38.479515   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:38.563409   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:38.563440   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:41.105122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:41.119932   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:41.119997   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:41.158809   66232 cri.go:89] found id: ""
	I0314 01:01:41.158837   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.158847   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:41.158854   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:41.158915   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:41.201150   66232 cri.go:89] found id: ""
	I0314 01:01:41.201175   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.201183   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:41.201189   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:41.201239   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:41.240139   66232 cri.go:89] found id: ""
	I0314 01:01:41.240165   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.240173   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:41.240178   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:41.240232   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:41.278220   66232 cri.go:89] found id: ""
	I0314 01:01:41.278249   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.278257   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:41.278262   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:41.278310   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:41.313130   66232 cri.go:89] found id: ""
	I0314 01:01:41.313161   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.313170   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:41.313175   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:41.313235   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:41.351266   66232 cri.go:89] found id: ""
	I0314 01:01:41.351296   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.351305   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:41.351313   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:41.351378   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:41.389765   66232 cri.go:89] found id: ""
	I0314 01:01:41.389796   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.389807   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:41.389816   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:41.389893   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:41.437503   66232 cri.go:89] found id: ""
	I0314 01:01:41.437527   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.437537   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:41.437553   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:41.437568   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:41.451137   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:41.451170   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:41.554349   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:41.554376   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:41.554391   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:41.634670   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:41.634713   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:41.678576   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:41.678607   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:39.836520   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:41.837350   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:40.192708   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:42.193948   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:41.038661   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:43.538547   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:44.237699   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:44.252678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:44.252757   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:44.290393   66232 cri.go:89] found id: ""
	I0314 01:01:44.290420   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.290430   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:44.290438   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:44.290492   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:44.331394   66232 cri.go:89] found id: ""
	I0314 01:01:44.331426   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.331438   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:44.331446   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:44.331506   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:44.373654   66232 cri.go:89] found id: ""
	I0314 01:01:44.373686   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.373694   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:44.373702   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:44.373764   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:44.414168   66232 cri.go:89] found id: ""
	I0314 01:01:44.414198   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.414206   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:44.414212   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:44.414259   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:44.451158   66232 cri.go:89] found id: ""
	I0314 01:01:44.451183   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.451193   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:44.451201   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:44.451269   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:44.495410   66232 cri.go:89] found id: ""
	I0314 01:01:44.495436   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.495443   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:44.495450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:44.495509   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:44.539100   66232 cri.go:89] found id: ""
	I0314 01:01:44.539123   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.539129   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:44.539136   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:44.539189   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:44.581428   66232 cri.go:89] found id: ""
	I0314 01:01:44.581451   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.581463   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:44.581473   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:44.581491   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:44.657373   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:44.657393   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:44.657406   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:44.742163   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:44.742198   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:44.786447   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:44.786481   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:44.840479   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:44.840534   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:47.355369   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:47.369427   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:47.369491   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:47.408529   66232 cri.go:89] found id: ""
	I0314 01:01:47.408559   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.408567   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:47.408574   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:47.408619   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:47.445164   66232 cri.go:89] found id: ""
	I0314 01:01:47.445192   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.445201   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:47.445208   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:47.445255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:47.503333   66232 cri.go:89] found id: ""
	I0314 01:01:47.503367   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.503378   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:47.503385   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:47.503441   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:47.544289   66232 cri.go:89] found id: ""
	I0314 01:01:47.544313   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.544322   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:47.544329   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:47.544389   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:47.581686   66232 cri.go:89] found id: ""
	I0314 01:01:47.581707   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.581715   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:47.581726   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:47.581773   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:47.620907   66232 cri.go:89] found id: ""
	I0314 01:01:47.620937   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.620948   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:47.620954   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:47.620999   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:47.655975   66232 cri.go:89] found id: ""
	I0314 01:01:47.656006   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.656018   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:47.656026   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:47.656088   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:47.694787   66232 cri.go:89] found id: ""
	I0314 01:01:47.694813   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.694822   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:47.694832   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:47.694846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:47.732722   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:47.732752   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:47.784521   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:47.784551   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:47.798074   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:47.798096   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:47.872951   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:47.872971   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:47.872984   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:44.336278   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:46.336942   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:44.693975   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:47.194065   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:46.037997   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:48.038275   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.456896   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:50.472083   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:50.472159   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:50.510213   66232 cri.go:89] found id: ""
	I0314 01:01:50.510236   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.510244   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:50.510251   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:50.510308   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:50.551878   66232 cri.go:89] found id: ""
	I0314 01:01:50.551906   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.551915   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:50.551923   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:50.551983   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:50.599971   66232 cri.go:89] found id: ""
	I0314 01:01:50.599993   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.600000   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:50.600011   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:50.600068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:50.636105   66232 cri.go:89] found id: ""
	I0314 01:01:50.636135   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.636146   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:50.636154   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:50.636218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:50.674154   66232 cri.go:89] found id: ""
	I0314 01:01:50.674188   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.674199   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:50.674207   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:50.674273   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:50.711946   66232 cri.go:89] found id: ""
	I0314 01:01:50.711980   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.711992   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:50.711999   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:50.712048   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:50.750574   66232 cri.go:89] found id: ""
	I0314 01:01:50.750601   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.750612   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:50.750620   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:50.750679   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:50.788991   66232 cri.go:89] found id: ""
	I0314 01:01:50.789022   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.789033   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:50.789045   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:50.789060   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:50.842491   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:50.842524   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:50.857759   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:50.857785   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:50.929715   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:50.929739   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:50.929754   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:51.008843   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:51.008883   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:48.835669   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.835802   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.335897   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:49.692834   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:52.191722   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:54.192101   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.543509   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.037040   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.554369   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:53.569045   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:53.569125   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:53.607571   66232 cri.go:89] found id: ""
	I0314 01:01:53.607602   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.607613   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:53.607621   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:53.607700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:53.647998   66232 cri.go:89] found id: ""
	I0314 01:01:53.648027   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.648037   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:53.648044   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:53.648116   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:53.684825   66232 cri.go:89] found id: ""
	I0314 01:01:53.684855   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.684866   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:53.684873   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:53.684931   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:53.722438   66232 cri.go:89] found id: ""
	I0314 01:01:53.722465   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.722476   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:53.722484   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:53.722543   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:53.761945   66232 cri.go:89] found id: ""
	I0314 01:01:53.761987   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.761999   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:53.762014   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:53.762075   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:53.799307   66232 cri.go:89] found id: ""
	I0314 01:01:53.799338   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.799349   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:53.799362   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:53.799420   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:53.838685   66232 cri.go:89] found id: ""
	I0314 01:01:53.838713   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.838724   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:53.838731   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:53.838810   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:53.884324   66232 cri.go:89] found id: ""
	I0314 01:01:53.884351   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.884360   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:53.884370   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:53.884382   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:53.942495   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:53.942527   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:54.007790   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:54.007828   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:54.023348   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:54.023378   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:54.099122   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:54.099150   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:54.099165   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:56.679464   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:56.693691   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:56.693753   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:56.731721   66232 cri.go:89] found id: ""
	I0314 01:01:56.731749   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.731756   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:56.731761   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:56.731811   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:56.766579   66232 cri.go:89] found id: ""
	I0314 01:01:56.766607   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.766614   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:56.766620   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:56.766675   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:56.807537   66232 cri.go:89] found id: ""
	I0314 01:01:56.807565   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.807574   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:56.807579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:56.807631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:56.849077   66232 cri.go:89] found id: ""
	I0314 01:01:56.849100   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.849106   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:56.849112   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:56.849169   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:56.890982   66232 cri.go:89] found id: ""
	I0314 01:01:56.891003   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.891011   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:56.891016   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:56.891061   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:56.929769   66232 cri.go:89] found id: ""
	I0314 01:01:56.929790   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.929799   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:56.929805   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:56.929848   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:56.967319   66232 cri.go:89] found id: ""
	I0314 01:01:56.967346   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.967356   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:56.967363   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:56.967421   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:57.004649   66232 cri.go:89] found id: ""
	I0314 01:01:57.004670   66232 logs.go:276] 0 containers: []
	W0314 01:01:57.004677   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:57.004685   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:57.004696   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:57.018578   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:57.018604   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:57.090826   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:57.090852   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:57.090868   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:57.170367   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:57.170398   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:57.216138   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:57.216179   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:55.835724   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:57.836100   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:56.192712   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:58.193199   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:55.538829   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:58.037589   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:00.038724   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:59.769685   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:59.786652   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:59.786713   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:59.869453   66232 cri.go:89] found id: ""
	I0314 01:01:59.869480   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.869491   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:59.869499   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:59.869568   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:59.915747   66232 cri.go:89] found id: ""
	I0314 01:01:59.915769   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.915777   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:59.915782   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:59.915840   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:59.951088   66232 cri.go:89] found id: ""
	I0314 01:01:59.951117   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.951127   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:59.951133   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:59.951197   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:59.986847   66232 cri.go:89] found id: ""
	I0314 01:01:59.986877   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.986890   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:59.986898   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:59.986954   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:00.025390   66232 cri.go:89] found id: ""
	I0314 01:02:00.025420   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.025432   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:00.025440   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:00.025493   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:00.064174   66232 cri.go:89] found id: ""
	I0314 01:02:00.064206   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.064217   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:00.064226   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:00.064286   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:00.102079   66232 cri.go:89] found id: ""
	I0314 01:02:00.102102   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.102112   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:00.102119   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:00.102179   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:00.138672   66232 cri.go:89] found id: ""
	I0314 01:02:00.138700   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.138711   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:00.138721   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:00.138740   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:00.153516   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:00.153548   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:00.226585   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:00.226616   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:00.226631   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:00.307861   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:00.307898   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:00.353938   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:00.353966   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:02.909252   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:02.923483   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:02.923560   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:02.964379   66232 cri.go:89] found id: ""
	I0314 01:02:02.964408   66232 logs.go:276] 0 containers: []
	W0314 01:02:02.964419   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:02.964427   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:02.964486   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:03.001988   66232 cri.go:89] found id: ""
	I0314 01:02:03.002018   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.002028   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:03.002036   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:03.002106   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:03.043534   66232 cri.go:89] found id: ""
	I0314 01:02:03.043561   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.043572   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:03.043579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:03.043637   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:03.083413   66232 cri.go:89] found id: ""
	I0314 01:02:03.083436   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.083444   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:03.083450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:03.083504   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:59.837128   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.336485   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:00.692314   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.693186   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.039631   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:04.536890   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:03.117627   66232 cri.go:89] found id: ""
	I0314 01:02:03.117652   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.117664   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:03.117670   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:03.117718   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:03.151758   66232 cri.go:89] found id: ""
	I0314 01:02:03.151791   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.151802   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:03.151810   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:03.151861   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:03.192091   66232 cri.go:89] found id: ""
	I0314 01:02:03.192112   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.192118   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:03.192124   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:03.192178   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:03.235995   66232 cri.go:89] found id: ""
	I0314 01:02:03.236019   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.236029   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:03.236039   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:03.236053   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:03.289431   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:03.289475   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:03.305271   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:03.305325   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:03.383902   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:03.383922   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:03.383937   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:03.462882   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:03.462926   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:06.007991   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:06.023709   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:06.023768   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:06.063630   66232 cri.go:89] found id: ""
	I0314 01:02:06.063655   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.063662   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:06.063669   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:06.063727   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:06.103042   66232 cri.go:89] found id: ""
	I0314 01:02:06.103074   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.103083   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:06.103092   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:06.103149   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:06.139774   66232 cri.go:89] found id: ""
	I0314 01:02:06.139799   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.139810   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:06.139817   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:06.139874   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:06.176671   66232 cri.go:89] found id: ""
	I0314 01:02:06.176713   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.176724   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:06.176732   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:06.176798   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:06.216798   66232 cri.go:89] found id: ""
	I0314 01:02:06.216828   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.216840   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:06.216847   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:06.216903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:06.256606   66232 cri.go:89] found id: ""
	I0314 01:02:06.256635   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.256645   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:06.256653   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:06.256712   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:06.295087   66232 cri.go:89] found id: ""
	I0314 01:02:06.295119   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.295129   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:06.295137   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:06.295198   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:06.329411   66232 cri.go:89] found id: ""
	I0314 01:02:06.329441   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.329454   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:06.329464   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:06.329489   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:06.412363   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:06.412409   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:06.458902   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:06.458932   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:06.510147   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:06.510182   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:06.526670   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:06.526695   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:06.604970   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:04.835705   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:07.335832   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:04.693230   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:06.694579   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:08.697716   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:06.538380   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:08.538547   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:09.106124   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:09.119646   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:09.119709   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:09.155771   66232 cri.go:89] found id: ""
	I0314 01:02:09.155804   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.155815   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:09.155824   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:09.155883   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:09.191683   66232 cri.go:89] found id: ""
	I0314 01:02:09.191722   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.191734   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:09.191742   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:09.191808   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:09.227010   66232 cri.go:89] found id: ""
	I0314 01:02:09.227033   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.227041   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:09.227050   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:09.227118   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:09.262820   66232 cri.go:89] found id: ""
	I0314 01:02:09.262850   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.262861   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:09.262869   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:09.262925   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:09.296057   66232 cri.go:89] found id: ""
	I0314 01:02:09.296092   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.296102   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:09.296109   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:09.296171   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:09.329589   66232 cri.go:89] found id: ""
	I0314 01:02:09.329615   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.329626   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:09.329634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:09.329685   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:09.374675   66232 cri.go:89] found id: ""
	I0314 01:02:09.374702   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.374710   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:09.374718   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:09.374785   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:09.412467   66232 cri.go:89] found id: ""
	I0314 01:02:09.412497   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.412508   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:09.412518   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:09.412535   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:09.465354   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:09.465386   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:09.481823   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:09.481849   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:09.558431   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:09.558458   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:09.558475   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:09.641132   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:09.641171   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:12.190189   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:12.203783   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:12.203858   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:12.240189   66232 cri.go:89] found id: ""
	I0314 01:02:12.240219   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.240230   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:12.240238   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:12.240296   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:12.276307   66232 cri.go:89] found id: ""
	I0314 01:02:12.276336   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.276346   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:12.276354   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:12.276415   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:12.316916   66232 cri.go:89] found id: ""
	I0314 01:02:12.316949   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.316967   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:12.316975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:12.317036   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:12.356871   66232 cri.go:89] found id: ""
	I0314 01:02:12.356900   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.356910   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:12.356918   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:12.356981   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:12.391983   66232 cri.go:89] found id: ""
	I0314 01:02:12.392015   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.392026   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:12.392035   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:12.392105   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:12.428823   66232 cri.go:89] found id: ""
	I0314 01:02:12.428857   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.428868   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:12.428877   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:12.428938   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:12.466319   66232 cri.go:89] found id: ""
	I0314 01:02:12.466342   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.466349   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:12.466354   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:12.466413   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:12.502277   66232 cri.go:89] found id: ""
	I0314 01:02:12.502309   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.502321   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:12.502333   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:12.502352   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:12.582309   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:12.582340   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:12.621333   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:12.621357   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:12.678396   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:12.678432   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:12.694371   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:12.694397   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:12.767592   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:09.337016   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.339617   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.192226   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:13.195180   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.037728   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:13.037824   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.038206   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
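	(The interleaved pod_ready.go lines come from parallel test runs, each polling its metrics-server pod until the Ready condition turns true or a four-minute deadline expires. A rough local equivalent, shelling out to `kubectl wait`, is sketched below; the namespace and pod name are copied from the log and will differ per run, and this is not the code minikube itself uses.)

	// wait_ready.go - illustrative sketch only, assuming kubectl and a working kubeconfig.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Wait up to 4 minutes for the pod's Ready condition, mirroring the deadline in the log.
		cmd := exec.Command("kubectl", "-n", "kube-system",
			"wait", "--for=condition=Ready",
			"pod/metrics-server-57f55c9bc5-7pzll", "--timeout=4m")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// Matches the outcome seen above: the wait fails if the pod never reports Ready.
			fmt.Println("wait failed:", err)
		}
	}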
	I0314 01:02:15.268149   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:15.281634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:15.281707   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:15.316336   66232 cri.go:89] found id: ""
	I0314 01:02:15.316358   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.316366   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:15.316373   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:15.316437   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:15.356168   66232 cri.go:89] found id: ""
	I0314 01:02:15.356194   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.356201   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:15.356206   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:15.356257   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:15.394686   66232 cri.go:89] found id: ""
	I0314 01:02:15.394714   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.394726   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:15.394734   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:15.394813   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:15.433996   66232 cri.go:89] found id: ""
	I0314 01:02:15.434023   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.434034   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:15.434042   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:15.434103   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:15.479544   66232 cri.go:89] found id: ""
	I0314 01:02:15.479572   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.479583   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:15.479590   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:15.479659   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:15.514835   66232 cri.go:89] found id: ""
	I0314 01:02:15.514865   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.514875   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:15.514883   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:15.514942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:15.554980   66232 cri.go:89] found id: ""
	I0314 01:02:15.555011   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.555022   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:15.555030   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:15.555092   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:15.590130   66232 cri.go:89] found id: ""
	I0314 01:02:15.590167   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.590178   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:15.590188   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:15.590203   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:15.658375   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:15.658394   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:15.658407   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:15.737774   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:15.737806   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:15.780480   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:15.780512   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:15.832787   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:15.832830   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:13.834955   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.836544   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:17.836736   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.693510   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:18.193089   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:17.537729   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:19.540149   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:18.350032   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:18.364871   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:18.364931   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:18.406581   66232 cri.go:89] found id: ""
	I0314 01:02:18.406611   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.406620   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:18.406633   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:18.406696   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:18.446140   66232 cri.go:89] found id: ""
	I0314 01:02:18.446166   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.446176   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:18.446183   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:18.446242   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:18.492662   66232 cri.go:89] found id: ""
	I0314 01:02:18.492705   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.492713   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:18.492719   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:18.492777   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:18.535933   66232 cri.go:89] found id: ""
	I0314 01:02:18.535961   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.535972   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:18.535980   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:18.536056   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:18.574133   66232 cri.go:89] found id: ""
	I0314 01:02:18.574159   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.574167   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:18.574173   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:18.574227   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:18.612726   66232 cri.go:89] found id: ""
	I0314 01:02:18.612750   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.612757   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:18.612763   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:18.612815   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:18.653068   66232 cri.go:89] found id: ""
	I0314 01:02:18.653092   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.653099   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:18.653105   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:18.653148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:18.692840   66232 cri.go:89] found id: ""
	I0314 01:02:18.692880   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.692890   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:18.692902   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:18.692915   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:18.748680   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:18.748717   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:18.764026   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:18.764054   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:18.841767   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:18.841791   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:18.841805   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:18.923479   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:18.923512   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:21.467679   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:21.482326   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:21.482400   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:21.519603   66232 cri.go:89] found id: ""
	I0314 01:02:21.519627   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.519635   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:21.519641   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:21.519711   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:21.562301   66232 cri.go:89] found id: ""
	I0314 01:02:21.562325   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.562333   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:21.562338   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:21.562395   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:21.599503   66232 cri.go:89] found id: ""
	I0314 01:02:21.599531   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.599539   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:21.599545   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:21.599598   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:21.635347   66232 cri.go:89] found id: ""
	I0314 01:02:21.635378   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.635390   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:21.635397   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:21.635458   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:21.672622   66232 cri.go:89] found id: ""
	I0314 01:02:21.672648   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.672658   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:21.672667   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:21.672719   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:21.713177   66232 cri.go:89] found id: ""
	I0314 01:02:21.713201   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.713209   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:21.713217   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:21.713277   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:21.754273   66232 cri.go:89] found id: ""
	I0314 01:02:21.754312   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.754336   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:21.754350   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:21.754408   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:21.793782   66232 cri.go:89] found id: ""
	I0314 01:02:21.793832   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.793852   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:21.793864   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:21.793886   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:21.877495   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:21.877521   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:21.877536   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:21.963446   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:21.963485   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:22.005250   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:22.005286   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:22.081328   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:22.081368   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:20.336150   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:21.836598   65864 pod_ready.go:81] duration metric: took 4m0.008051794s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	E0314 01:02:21.836623   65864 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:02:21.836633   65864 pod_ready.go:38] duration metric: took 4m4.551998385s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:02:21.836650   65864 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:02:21.836684   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:21.836737   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:21.913367   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:21.913392   65864 cri.go:89] found id: ""
	I0314 01:02:21.913401   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:21.913461   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:21.920425   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:21.920491   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:21.968527   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:21.968560   65864 cri.go:89] found id: ""
	I0314 01:02:21.968578   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:21.968641   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:21.973938   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:21.974019   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:22.027214   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:22.027239   65864 cri.go:89] found id: ""
	I0314 01:02:22.027250   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:22.027301   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.033919   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:22.034007   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:22.085453   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:22.085477   65864 cri.go:89] found id: ""
	I0314 01:02:22.085486   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:22.085541   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.091651   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:22.091726   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:22.134083   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:22.134112   65864 cri.go:89] found id: ""
	I0314 01:02:22.134121   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:22.134179   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.139013   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:22.139089   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:22.176760   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:22.176785   65864 cri.go:89] found id: ""
	I0314 01:02:22.176795   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:22.176844   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.182497   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:22.182573   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:22.236966   65864 cri.go:89] found id: ""
	I0314 01:02:22.237000   65864 logs.go:276] 0 containers: []
	W0314 01:02:22.237010   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:22.237017   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:22.237078   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:22.289422   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:22.289448   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:22.289454   65864 cri.go:89] found id: ""
	I0314 01:02:22.289462   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:22.289526   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.295489   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.300166   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:22.300189   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:22.361740   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:22.361779   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:22.432402   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:22.432443   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:22.476348   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:22.476378   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:22.516881   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:22.516911   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:22.576864   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:22.576899   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:22.622739   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:22.622783   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:22.679757   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:22.679794   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:22.882084   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:22.882126   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:22.937962   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:22.937999   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:22.994180   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:22.994209   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:23.038730   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:23.038761   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:23.518422   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:23.518471   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:20.193555   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:22.194625   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:22.039562   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:24.043053   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:24.599757   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:24.615216   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:24.615273   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:24.654495   66232 cri.go:89] found id: ""
	I0314 01:02:24.654521   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.654529   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:24.654535   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:24.654581   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:24.691822   66232 cri.go:89] found id: ""
	I0314 01:02:24.691854   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.691864   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:24.691872   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:24.691927   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:24.734755   66232 cri.go:89] found id: ""
	I0314 01:02:24.734796   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.734806   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:24.734812   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:24.734864   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:24.770474   66232 cri.go:89] found id: ""
	I0314 01:02:24.770502   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.770513   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:24.770520   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:24.770564   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:24.807518   66232 cri.go:89] found id: ""
	I0314 01:02:24.807549   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.807562   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:24.807570   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:24.807636   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:24.844469   66232 cri.go:89] found id: ""
	I0314 01:02:24.844500   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.844513   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:24.844521   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:24.844585   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:24.882099   66232 cri.go:89] found id: ""
	I0314 01:02:24.882136   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.882147   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:24.882155   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:24.882215   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:24.922711   66232 cri.go:89] found id: ""
	I0314 01:02:24.922751   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.922773   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:24.922787   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:24.922802   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:24.965349   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:24.965374   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:25.021552   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:25.021585   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:25.039990   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:25.040027   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:25.116945   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:25.116967   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:25.116981   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:27.706427   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:27.722129   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:27.722193   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:27.762976   66232 cri.go:89] found id: ""
	I0314 01:02:27.763015   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.763023   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:27.763029   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:27.763077   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:27.803939   66232 cri.go:89] found id: ""
	I0314 01:02:27.803979   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.803990   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:27.803997   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:27.804068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:27.844923   66232 cri.go:89] found id: ""
	I0314 01:02:27.844946   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.844953   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:27.844959   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:27.845015   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:27.882694   66232 cri.go:89] found id: ""
	I0314 01:02:27.882717   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.882725   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:27.882731   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:27.882801   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:27.922926   66232 cri.go:89] found id: ""
	I0314 01:02:27.922958   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.922968   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:27.922975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:27.923035   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:27.960120   66232 cri.go:89] found id: ""
	I0314 01:02:27.960149   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.960160   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:27.960168   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:27.960228   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:28.015021   66232 cri.go:89] found id: ""
	I0314 01:02:28.015047   66232 logs.go:276] 0 containers: []
	W0314 01:02:28.015056   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:28.015062   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:28.015119   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:28.054923   66232 cri.go:89] found id: ""
	I0314 01:02:28.054946   66232 logs.go:276] 0 containers: []
	W0314 01:02:28.054952   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:28.054960   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:28.054972   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:26.038373   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:26.055483   65864 api_server.go:72] duration metric: took 4m14.013216316s to wait for apiserver process to appear ...
	I0314 01:02:26.055505   65864 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:02:26.055536   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:26.055585   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:26.108344   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:26.108363   65864 cri.go:89] found id: ""
	I0314 01:02:26.108370   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:26.108420   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.112806   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:26.112872   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:26.155399   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:26.155417   65864 cri.go:89] found id: ""
	I0314 01:02:26.155424   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:26.155468   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.159725   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:26.159780   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:26.201938   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:26.201960   65864 cri.go:89] found id: ""
	I0314 01:02:26.201968   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:26.202012   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.206751   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:26.206831   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:26.252327   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:26.252350   65864 cri.go:89] found id: ""
	I0314 01:02:26.252357   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:26.252405   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.257325   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:26.257387   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:26.297880   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:26.297901   65864 cri.go:89] found id: ""
	I0314 01:02:26.297910   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:26.297965   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.302607   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:26.302679   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:26.343104   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:26.343131   65864 cri.go:89] found id: ""
	I0314 01:02:26.343141   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:26.343207   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.347594   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:26.347652   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:26.390465   65864 cri.go:89] found id: ""
	I0314 01:02:26.390495   65864 logs.go:276] 0 containers: []
	W0314 01:02:26.390505   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:26.390517   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:26.390576   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:26.434540   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:26.434566   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:26.434572   65864 cri.go:89] found id: ""
	I0314 01:02:26.434582   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:26.434644   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.439794   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.445012   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:26.445036   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:26.488302   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:26.488331   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:26.526601   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:26.526630   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:26.578955   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:26.578989   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:26.633535   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:26.633573   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:26.764496   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:26.764533   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:26.822677   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:26.822713   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:26.866628   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:26.866653   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:26.909498   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:26.909524   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:26.965612   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:26.965646   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:27.004922   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:27.004974   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:27.422800   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:27.422844   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:27.441082   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:27.441113   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:24.693782   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:27.193414   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:26.537535   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:28.539922   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:28.111690   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:28.111723   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:28.126158   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:28.126189   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:28.200521   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:28.200542   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:28.200554   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:28.279637   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:28.279672   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:30.824286   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:30.840707   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.840787   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.888628   66232 cri.go:89] found id: ""
	I0314 01:02:30.888658   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.888669   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:30.888677   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.888758   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.934219   66232 cri.go:89] found id: ""
	I0314 01:02:30.934254   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.934264   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:30.934272   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.934332   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.979679   66232 cri.go:89] found id: ""
	I0314 01:02:30.979702   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.979713   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:30.979721   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.979792   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:31.024045   66232 cri.go:89] found id: ""
	I0314 01:02:31.024074   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.024085   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:31.024093   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:31.024150   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:31.070153   66232 cri.go:89] found id: ""
	I0314 01:02:31.070185   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.070197   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:31.070204   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:31.070267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:31.121943   66232 cri.go:89] found id: ""
	I0314 01:02:31.121972   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.121983   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:31.121992   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:31.122056   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:31.168934   66232 cri.go:89] found id: ""
	I0314 01:02:31.168951   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.168959   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:31.168965   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:31.169040   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:31.213885   66232 cri.go:89] found id: ""
	I0314 01:02:31.213917   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.213929   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:31.213939   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:31.213958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:31.304097   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:31.304127   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:31.304142   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:31.388525   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:31.388566   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:31.442920   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:31.442953   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:31.505932   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:31.505965   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:29.995508   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 01:02:30.001049   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0314 01:02:30.002172   65864 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 01:02:30.002194   65864 api_server.go:131] duration metric: took 3.946684299s to wait for apiserver health ...
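	(Once a kube-apiserver container exists, api_server.go switches from the process check to probing the /healthz endpoint until it returns 200 with body "ok", as the two lines above show. A minimal sketch of such a probe follows; the address is taken from the log, TLS verification is skipped and no client certificate is sent purely for illustration, whereas minikube authenticates with the cluster's admin credentials, so an unauthenticated request may be rejected depending on RBAC.)

	// healthz_probe.go - illustrative sketch only, not minikube's implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.115:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status %d: %s\n", resp.StatusCode, body) // a healthy apiserver answers 200 "ok"
	}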
	I0314 01:02:30.002201   65864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:02:30.002224   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.002268   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.043814   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:30.043836   65864 cri.go:89] found id: ""
	I0314 01:02:30.043850   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:30.043904   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.048215   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.048287   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.085507   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:30.085530   65864 cri.go:89] found id: ""
	I0314 01:02:30.085538   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:30.085587   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.089899   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.089958   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.129518   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:30.129538   65864 cri.go:89] found id: ""
	I0314 01:02:30.129545   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:30.129588   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.134037   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.134121   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:30.178092   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:30.178114   65864 cri.go:89] found id: ""
	I0314 01:02:30.178122   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:30.178174   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.184655   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:30.184712   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:30.223945   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:30.223969   65864 cri.go:89] found id: ""
	I0314 01:02:30.223987   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:30.224051   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.228354   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:30.228410   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:30.265712   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:30.265741   65864 cri.go:89] found id: ""
	I0314 01:02:30.265758   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:30.265814   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.270260   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:30.270312   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:30.320283   65864 cri.go:89] found id: ""
	I0314 01:02:30.320314   65864 logs.go:276] 0 containers: []
	W0314 01:02:30.320327   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:30.320334   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:30.320385   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:30.360838   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:30.360865   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:30.360869   65864 cri.go:89] found id: ""
	I0314 01:02:30.360876   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:30.360919   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.366350   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.370839   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:30.370862   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:30.422403   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:30.422432   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:30.461303   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:30.461333   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:30.500335   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:30.500364   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:30.925694   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:30.925740   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:30.977607   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:30.977643   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:31.040726   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:31.040758   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:31.097774   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:31.097811   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:31.161995   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:31.162038   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:31.229782   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:31.229823   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:31.268715   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:31.268742   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:31.288135   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:31.288164   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:31.459345   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:31.459375   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:34.020556   65864 system_pods.go:59] 8 kube-system pods found
	I0314 01:02:34.020589   65864 system_pods.go:61] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running
	I0314 01:02:34.020598   65864 system_pods.go:61] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running
	I0314 01:02:34.020607   65864 system_pods.go:61] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running
	I0314 01:02:34.020612   65864 system_pods.go:61] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running
	I0314 01:02:34.020616   65864 system_pods.go:61] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 01:02:34.020620   65864 system_pods.go:61] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running
	I0314 01:02:34.020628   65864 system_pods.go:61] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:34.020634   65864 system_pods.go:61] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 01:02:34.020644   65864 system_pods.go:74] duration metric: took 4.018436618s to wait for pod list to return data ...
	I0314 01:02:34.020653   65864 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:02:34.023473   65864 default_sa.go:45] found service account: "default"
	I0314 01:02:34.023496   65864 default_sa.go:55] duration metric: took 2.831779ms for default service account to be created ...
	I0314 01:02:34.023504   65864 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:02:34.030011   65864 system_pods.go:86] 8 kube-system pods found
	I0314 01:02:34.030060   65864 system_pods.go:89] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running
	I0314 01:02:34.030068   65864 system_pods.go:89] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running
	I0314 01:02:34.030077   65864 system_pods.go:89] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running
	I0314 01:02:34.030083   65864 system_pods.go:89] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running
	I0314 01:02:34.030092   65864 system_pods.go:89] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 01:02:34.030107   65864 system_pods.go:89] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running
	I0314 01:02:34.030124   65864 system_pods.go:89] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:34.030131   65864 system_pods.go:89] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 01:02:34.030143   65864 system_pods.go:126] duration metric: took 6.633594ms to wait for k8s-apps to be running ...
	I0314 01:02:34.030188   65864 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:02:34.030262   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:34.050932   65864 system_svc.go:56] duration metric: took 20.734837ms WaitForService to wait for kubelet
	I0314 01:02:34.050961   65864 kubeadm.go:576] duration metric: took 4m22.008698948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:02:34.050980   65864 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:02:34.055036   65864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:02:34.055068   65864 node_conditions.go:123] node cpu capacity is 2
	I0314 01:02:34.055083   65864 node_conditions.go:105] duration metric: took 4.097364ms to run NodePressure ...
	I0314 01:02:34.055105   65864 start.go:240] waiting for startup goroutines ...
	I0314 01:02:34.055118   65864 start.go:245] waiting for cluster config update ...
	I0314 01:02:34.055132   65864 start.go:254] writing updated cluster config ...
	I0314 01:02:34.055496   65864 ssh_runner.go:195] Run: rm -f paused
	I0314 01:02:34.113276   65864 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 01:02:34.115462   65864 out.go:177] * Done! kubectl is now configured to use "no-preload-585806" cluster and "default" namespace by default
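Just before the "Done!" line, the 65864 process waits on the apiserver healthz endpoint (https://192.168.39.115:8443/healthz returning 200 and "ok"). Below is an illustrative polling sketch of that wait, not minikube's own code; the endpoint and timeout are taken from the log, and skipping TLS verification is an assumption made only to keep the demo self-contained (a real client would trust the cluster CA).

// Sketch: poll the apiserver /healthz endpoint until it answers 200 "ok" or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the demo: accept the apiserver's self-signed certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, string(body))
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.115:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}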
	I0314 01:02:29.693041   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:32.194975   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:30.538234   66021 pod_ready.go:81] duration metric: took 4m0.007493671s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	E0314 01:02:30.538259   66021 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:02:30.538266   66021 pod_ready.go:38] duration metric: took 4m4.916255619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:02:30.538278   66021 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:02:30.538307   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.538363   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.592811   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:30.592839   66021 cri.go:89] found id: ""
	I0314 01:02:30.592850   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:30.592911   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.598839   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.598908   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.642277   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:30.642301   66021 cri.go:89] found id: ""
	I0314 01:02:30.642310   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:30.642362   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.646745   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.646815   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.696518   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:30.696538   66021 cri.go:89] found id: ""
	I0314 01:02:30.696548   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:30.696601   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.701433   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.701496   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:30.741777   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:30.741805   66021 cri.go:89] found id: ""
	I0314 01:02:30.741815   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:30.741873   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.746610   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:30.746678   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:30.802714   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:30.802734   66021 cri.go:89] found id: ""
	I0314 01:02:30.802743   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:30.802905   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.807733   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:30.807800   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:30.857325   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:30.857348   66021 cri.go:89] found id: ""
	I0314 01:02:30.857357   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:30.857411   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.864272   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:30.864342   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:30.913206   66021 cri.go:89] found id: ""
	I0314 01:02:30.913233   66021 logs.go:276] 0 containers: []
	W0314 01:02:30.913240   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:30.913246   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:30.913306   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:30.962101   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:30.962140   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:30.962146   66021 cri.go:89] found id: ""
	I0314 01:02:30.962164   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:30.962225   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.968138   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.974297   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:30.974321   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:31.169483   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:31.169515   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:31.231894   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:31.231933   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:31.292732   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:31.292784   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:31.340076   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:31.340116   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:31.405921   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:31.405964   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:31.456370   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:31.456398   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:31.504710   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:31.504736   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:31.989644   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:31.989675   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:32.048608   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:32.048641   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:32.063791   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:32.063820   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:32.104259   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:32.104285   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:32.143364   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:32.143388   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:34.704603   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:34.723060   66021 api_server.go:72] duration metric: took 4m16.82749669s to wait for apiserver process to appear ...
	I0314 01:02:34.723094   66021 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:02:34.723131   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:34.723195   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:34.763208   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:34.763235   66021 cri.go:89] found id: ""
	I0314 01:02:34.763245   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:34.763321   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.768746   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:34.768824   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:34.811836   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:34.811859   66021 cri.go:89] found id: ""
	I0314 01:02:34.811867   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:34.811921   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.816649   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:34.816714   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:34.857291   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:34.857312   66021 cri.go:89] found id: ""
	I0314 01:02:34.857319   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:34.857364   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.861988   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:34.862069   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:34.903495   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:34.903520   66021 cri.go:89] found id: ""
	I0314 01:02:34.903529   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:34.903589   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.908672   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:34.908728   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:34.954304   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:34.954327   66021 cri.go:89] found id: ""
	I0314 01:02:34.954335   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:34.954381   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.959231   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:34.959288   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:35.004076   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:35.004102   66021 cri.go:89] found id: ""
	I0314 01:02:35.004111   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:35.004164   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.009125   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:35.009193   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:35.049932   66021 cri.go:89] found id: ""
	I0314 01:02:35.049961   66021 logs.go:276] 0 containers: []
	W0314 01:02:35.049971   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:35.049979   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:35.050047   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:35.107527   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:35.107575   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:35.107582   66021 cri.go:89] found id: ""
	I0314 01:02:35.107591   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:35.107649   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.112355   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.116898   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:35.116925   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:34.021725   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:34.039342   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:34.039420   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:34.086740   66232 cri.go:89] found id: ""
	I0314 01:02:34.086775   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.086787   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:34.086803   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:34.086869   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:34.131404   66232 cri.go:89] found id: ""
	I0314 01:02:34.131432   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.131440   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:34.131445   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:34.131497   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:34.179153   66232 cri.go:89] found id: ""
	I0314 01:02:34.179182   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.179192   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:34.179199   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:34.179255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:34.228867   66232 cri.go:89] found id: ""
	I0314 01:02:34.228892   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.228902   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:34.228908   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:34.228942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:34.272680   66232 cri.go:89] found id: ""
	I0314 01:02:34.272705   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.272715   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:34.272722   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:34.272772   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:34.311626   66232 cri.go:89] found id: ""
	I0314 01:02:34.311672   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.311684   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:34.311692   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:34.311751   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:34.349977   66232 cri.go:89] found id: ""
	I0314 01:02:34.349998   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.350006   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:34.350012   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:34.350070   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:34.398456   66232 cri.go:89] found id: ""
	I0314 01:02:34.398481   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.398491   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:34.398503   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:34.398515   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:34.472170   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:34.472208   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:34.498046   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:34.498076   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:34.574474   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:34.574496   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:34.574529   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:34.656398   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:34.656435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:37.201236   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:37.216950   66232 kubeadm.go:591] duration metric: took 4m2.27726413s to restartPrimaryControlPlane
	W0314 01:02:37.217024   66232 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 01:02:37.217054   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 01:02:34.693825   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:37.191981   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:39.193819   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:35.155896   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:35.155929   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:35.198893   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:35.198923   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:35.258044   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:35.258076   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:35.296826   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:35.296859   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:35.349583   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:35.349619   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:35.400768   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:35.400805   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:35.528320   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:35.528357   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:35.571141   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:35.571174   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:35.612630   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:35.612658   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:36.034287   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:36.034321   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:36.093027   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:36.093054   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:36.150546   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:36.150589   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:38.673291   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 01:02:38.678087   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0314 01:02:38.679655   66021 api_server.go:141] control plane version: v1.28.4
	I0314 01:02:38.679674   66021 api_server.go:131] duration metric: took 3.956573598s to wait for apiserver health ...
	I0314 01:02:38.679680   66021 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:02:38.679700   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:38.679741   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:38.727884   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:38.727908   66021 cri.go:89] found id: ""
	I0314 01:02:38.727918   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:38.727974   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.732935   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:38.733003   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:38.771359   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:38.771387   66021 cri.go:89] found id: ""
	I0314 01:02:38.771397   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:38.771452   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.775888   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:38.775948   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:38.814905   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:38.814934   66021 cri.go:89] found id: ""
	I0314 01:02:38.814944   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:38.815018   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.820018   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:38.820096   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:38.869174   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:38.869200   66021 cri.go:89] found id: ""
	I0314 01:02:38.869210   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:38.869268   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.879998   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:38.880071   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:38.960143   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:38.960187   66021 cri.go:89] found id: ""
	I0314 01:02:38.960198   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:38.960258   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.964872   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:38.964940   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:39.005104   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:39.005126   66021 cri.go:89] found id: ""
	I0314 01:02:39.005134   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:39.005178   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.009751   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:39.009803   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:39.048232   66021 cri.go:89] found id: ""
	I0314 01:02:39.048263   66021 logs.go:276] 0 containers: []
	W0314 01:02:39.048274   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:39.048281   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:39.048335   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:39.087548   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:39.087568   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:39.087572   66021 cri.go:89] found id: ""
	I0314 01:02:39.087579   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:39.087624   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.092379   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.097599   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:39.097621   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:39.236455   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:39.236484   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:39.284275   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:39.284300   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:39.341908   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:39.341939   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:39.384407   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:39.384435   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:39.445137   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:39.445167   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:39.501656   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:39.501686   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:39.567627   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:39.567661   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:39.584561   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:39.584601   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:39.626131   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:39.626196   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:40.002525   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:40.002572   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:40.058721   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:40.058753   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:40.097905   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:40.097941   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:39.562661   66232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.345580159s)
	I0314 01:02:39.562733   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:39.579845   66232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 01:02:39.592242   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:02:39.603936   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:02:39.603962   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 01:02:39.604023   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:02:39.614854   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:02:39.614909   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:02:39.626602   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:02:39.637282   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:02:39.637334   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:02:39.650019   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:02:39.662020   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:02:39.662084   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:02:39.674740   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:02:39.685131   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:02:39.685190   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 01:02:39.696251   66232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:02:39.768972   66232 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 01:02:39.769055   66232 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:02:39.926950   66232 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:02:39.927086   66232 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:02:39.927239   66232 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 01:02:40.161671   66232 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:02:40.164039   66232 out.go:204]   - Generating certificates and keys ...
	I0314 01:02:40.164124   66232 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:02:40.164219   66232 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:02:40.164321   66232 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 01:02:40.164411   66232 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 01:02:40.164508   66232 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 01:02:40.164595   66232 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 01:02:40.164680   66232 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 01:02:40.164762   66232 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 01:02:40.164868   66232 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 01:02:40.164982   66232 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 01:02:40.165050   66232 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 01:02:40.165123   66232 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:02:40.264416   66232 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:02:40.417229   66232 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:02:40.489457   66232 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:02:40.743517   66232 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:02:40.759319   66232 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:02:40.760643   66232 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:02:40.760715   66232 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:02:40.939953   66232 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
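The 66232 process above resets the control plane, then checks each kubeconfig under /etc/kubernetes for the expected endpoint ("https://control-plane.minikube.internal:8443"), removes any file that is missing or stale, and re-runs kubeadm init so the files are regenerated. The sketch below only illustrates that cleanup-then-init sequence under simplifying assumptions: it reads the files directly instead of shelling out to `sudo grep` over SSH, it elides the ignore-preflight-errors flags shown in the log, and it needs root to read /etc/kubernetes.

// Rough sketch of the stale kubeconfig cleanup followed by kubeadm init.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so kubeadm writes a fresh one.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			_ = os.Remove(path)
		}
	}
	// kubeadm regenerates the kubeconfig files and static pod manifests during init.
	cmd := exec.Command("sudo", "kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run()
}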
	I0314 01:02:42.643820   66021 system_pods.go:59] 8 kube-system pods found
	I0314 01:02:42.643846   66021 system_pods.go:61] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running
	I0314 01:02:42.643851   66021 system_pods.go:61] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running
	I0314 01:02:42.643854   66021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running
	I0314 01:02:42.643858   66021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running
	I0314 01:02:42.643861   66021 system_pods.go:61] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running
	I0314 01:02:42.643863   66021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running
	I0314 01:02:42.643869   66021 system_pods.go:61] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:42.643874   66021 system_pods.go:61] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 01:02:42.643881   66021 system_pods.go:74] duration metric: took 3.964195909s to wait for pod list to return data ...
	I0314 01:02:42.643888   66021 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:02:42.646461   66021 default_sa.go:45] found service account: "default"
	I0314 01:02:42.646481   66021 default_sa.go:55] duration metric: took 2.585464ms for default service account to be created ...
	I0314 01:02:42.646490   66021 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:02:42.651961   66021 system_pods.go:86] 8 kube-system pods found
	I0314 01:02:42.651983   66021 system_pods.go:89] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running
	I0314 01:02:42.651989   66021 system_pods.go:89] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running
	I0314 01:02:42.651993   66021 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running
	I0314 01:02:42.651998   66021 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running
	I0314 01:02:42.652002   66021 system_pods.go:89] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running
	I0314 01:02:42.652006   66021 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running
	I0314 01:02:42.652012   66021 system_pods.go:89] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:42.652019   66021 system_pods.go:89] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 01:02:42.652027   66021 system_pods.go:126] duration metric: took 5.530611ms to wait for k8s-apps to be running ...
	I0314 01:02:42.652037   66021 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:02:42.652078   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:42.669896   66021 system_svc.go:56] duration metric: took 17.851623ms WaitForService to wait for kubelet
	I0314 01:02:42.669930   66021 kubeadm.go:576] duration metric: took 4m24.774372903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:02:42.669965   66021 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:02:42.672766   66021 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:02:42.672789   66021 node_conditions.go:123] node cpu capacity is 2
	I0314 01:02:42.672802   66021 node_conditions.go:105] duration metric: took 2.830665ms to run NodePressure ...
	I0314 01:02:42.672813   66021 start.go:240] waiting for startup goroutines ...
	I0314 01:02:42.672819   66021 start.go:245] waiting for cluster config update ...
	I0314 01:02:42.672829   66021 start.go:254] writing updated cluster config ...
	I0314 01:02:42.673076   66021 ssh_runner.go:195] Run: rm -f paused
	I0314 01:02:42.721481   66021 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 01:02:42.723479   66021 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-652215" cluster and "default" namespace by default
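
The kubelet-service check a few lines above (`systemctl is-active --quiet ...`) is what the WaitForService duration measures: the wait ends as soon as the unit reports active. A minimal local sketch of that check in Go, assuming it is run directly on the node rather than through minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 only when the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
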
	I0314 01:02:40.942001   66232 out.go:204]   - Booting up control plane ...
	I0314 01:02:40.942144   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:02:40.951012   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:02:40.952452   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:02:40.953336   66232 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:02:40.960365   66232 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:02:41.692569   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:43.693995   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:46.193241   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:48.194371   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:50.692479   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:52.692654   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:55.192035   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:57.692909   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:00.193154   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:02.194296   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:04.196022   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:06.693006   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:09.192302   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:11.192955   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:13.692552   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:15.192489   65557 pod_ready.go:81] duration metric: took 4m0.007020608s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	E0314 01:03:15.192527   65557 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:03:15.192538   65557 pod_ready.go:38] duration metric: took 4m4.053934642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
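
The "context deadline exceeded" above is the expected result of polling a pod's Ready condition under a fixed time budget (4m0s here) for a pod that never becomes Ready. A minimal sketch of the same pattern using only the standard library and kubectl; the namespace, pod name, and timeout mirror the log but are otherwise illustrative, and the real code talks to the API server directly rather than shelling out:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady shells out to kubectl and reports whether the pod's Ready condition is "True".
func podReady(ctx context.Context, ns, pod string) bool {
	out, err := exec.CommandContext(ctx, "kubectl", "get", "pod", pod, "-n", ns,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		if podReady(ctx, "kube-system", "metrics-server-57f55c9bc5-bbz2d") {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			// ctx.Err() reads "context deadline exceeded", as in the captured log.
			fmt.Println("gave up waiting:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}
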
	I0314 01:03:15.192554   65557 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:03:15.192587   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:15.192647   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:15.256619   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:15.256643   65557 cri.go:89] found id: ""
	I0314 01:03:15.256653   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:15.256707   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.262251   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:15.262317   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:15.305577   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:15.305605   65557 cri.go:89] found id: ""
	I0314 01:03:15.305613   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:15.305676   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.311058   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:15.311136   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:15.350580   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:15.350605   65557 cri.go:89] found id: ""
	I0314 01:03:15.350615   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:15.350675   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.355574   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:15.355637   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:15.395248   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:15.395278   65557 cri.go:89] found id: ""
	I0314 01:03:15.395289   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:15.395345   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.400714   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:15.400789   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:15.446181   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:15.446207   65557 cri.go:89] found id: ""
	I0314 01:03:15.446217   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:15.446280   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.451142   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:15.451220   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:15.499079   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:15.499106   65557 cri.go:89] found id: ""
	I0314 01:03:15.499120   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:15.499178   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.504092   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:15.504158   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:15.546791   65557 cri.go:89] found id: ""
	I0314 01:03:15.546820   65557 logs.go:276] 0 containers: []
	W0314 01:03:15.546830   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:15.546838   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:15.546898   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:15.586249   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:15.586271   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:15.586275   65557 cri.go:89] found id: ""
	I0314 01:03:15.586282   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:15.586341   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.590680   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.595060   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:15.595086   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:16.112562   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:16.112623   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:16.172847   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:16.172882   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:16.333057   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:16.333098   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:16.386456   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:16.386490   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:16.444375   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:16.444402   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:16.486220   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:16.486260   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:16.526438   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:16.526470   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:16.576927   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:16.576958   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:16.592148   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:16.592174   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:16.648514   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:16.648545   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:16.695025   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:16.695051   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:16.746925   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:16.746955   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
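
The "Gathering logs for ..." steps above are a series of shell commands run on the node: `journalctl -u <unit> -n 400` for kubelet and CRI-O, `crictl logs --tail 400 <id>` per container, and `crictl ps -a` for overall container status. A sketch of the per-container step; the container ID is copied from the log purely as an example:

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs returns the last n log lines of a CRI container, as the log-gathering step does.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	// etcd container ID taken from the captured log; substitute any ID reported by `crictl ps -a`.
	logs, err := tailContainerLogs("24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d", 400)
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(logs)
}
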
	I0314 01:03:19.285952   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:03:19.304257   65557 api_server.go:72] duration metric: took 4m15.904145845s to wait for apiserver process to appear ...
	I0314 01:03:19.304286   65557 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:03:19.304325   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:19.304387   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:20.960311   66232 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 01:03:20.961416   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:20.961634   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:19.352722   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:19.352749   65557 cri.go:89] found id: ""
	I0314 01:03:19.352758   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:19.352813   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.358745   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:19.358840   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:19.398652   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:19.398677   65557 cri.go:89] found id: ""
	I0314 01:03:19.398687   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:19.398745   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.403737   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:19.403812   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:19.449705   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:19.449789   65557 cri.go:89] found id: ""
	I0314 01:03:19.449804   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:19.449875   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.454646   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:19.454703   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:19.497413   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:19.497437   65557 cri.go:89] found id: ""
	I0314 01:03:19.497446   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:19.497505   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.502314   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:19.502383   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:19.544651   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:19.544670   65557 cri.go:89] found id: ""
	I0314 01:03:19.544677   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:19.544734   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.549565   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:19.549627   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:19.588946   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:19.588964   65557 cri.go:89] found id: ""
	I0314 01:03:19.588971   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:19.589021   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.593896   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:19.593962   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:19.635716   65557 cri.go:89] found id: ""
	I0314 01:03:19.635742   65557 logs.go:276] 0 containers: []
	W0314 01:03:19.635753   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:19.635759   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:19.635815   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:19.677464   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.677489   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:19.677495   65557 cri.go:89] found id: ""
	I0314 01:03:19.677505   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:19.677565   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.682353   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.687167   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:19.687188   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:19.736953   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:19.736991   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:19.781476   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:19.781506   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:19.822236   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:19.822265   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:19.866289   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:19.866312   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:19.911787   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:19.911815   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.950065   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:19.950101   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:19.989521   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:19.989554   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:20.384831   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:20.384868   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:20.441338   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:20.441369   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:20.457686   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:20.457713   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:20.576908   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:20.576939   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:20.620339   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:20.620368   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:23.171840   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 01:03:23.178026   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0314 01:03:23.179553   65557 api_server.go:141] control plane version: v1.28.4
	I0314 01:03:23.179581   65557 api_server.go:131] duration metric: took 3.875286718s to wait for apiserver health ...
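
The healthz check above is a plain HTTPS GET against the API server: a 200 response with body "ok" counts as healthy, after which the control-plane version is read. A minimal sketch using the address from the log; certificate verification is skipped here only to keep the example short, whereas the real client authenticates with the cluster's CA and client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Insecure TLS for illustration only; minikube uses the cluster CA and client certs.
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}

	resp, err := client.Get("https://192.168.50.72:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The captured run returned "200" with body "ok".
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}
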
	I0314 01:03:23.179592   65557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:03:23.179620   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:23.179680   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:23.228503   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:23.228523   65557 cri.go:89] found id: ""
	I0314 01:03:23.228530   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:23.228582   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.233166   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:23.233236   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:23.274079   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:23.274110   65557 cri.go:89] found id: ""
	I0314 01:03:23.274120   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:23.274179   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.279453   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:23.279559   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:23.319821   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:23.319844   65557 cri.go:89] found id: ""
	I0314 01:03:23.319854   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:23.319914   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.325134   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:23.325199   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:23.366475   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:23.366496   65557 cri.go:89] found id: ""
	I0314 01:03:23.366503   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:23.366547   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.371660   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:23.371716   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:23.416034   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:23.416060   65557 cri.go:89] found id: ""
	I0314 01:03:23.416069   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:23.416128   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.421256   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:23.421319   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:23.461772   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:23.461792   65557 cri.go:89] found id: ""
	I0314 01:03:23.461799   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:23.461848   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.466581   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:23.466644   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:23.513583   65557 cri.go:89] found id: ""
	I0314 01:03:23.513610   65557 logs.go:276] 0 containers: []
	W0314 01:03:23.513626   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:23.513633   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:23.513693   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:23.554856   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:23.554875   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:23.554879   65557 cri.go:89] found id: ""
	I0314 01:03:23.554885   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:23.554932   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.559820   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.564514   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:23.564534   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:23.619210   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:23.619246   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:23.750881   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:23.750908   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:23.800300   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:23.800342   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:23.849606   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:23.849637   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:23.896168   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:23.896194   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:23.938976   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:23.939008   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:23.955960   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:23.955988   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:23.999961   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:23.999990   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:24.044533   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:24.044562   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:24.097691   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:24.097720   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:24.137172   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:24.137207   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:24.480724   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:24.480767   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:27.042143   65557 system_pods.go:59] 8 kube-system pods found
	I0314 01:03:27.042177   65557 system_pods.go:61] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running
	I0314 01:03:27.042185   65557 system_pods.go:61] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running
	I0314 01:03:27.042191   65557 system_pods.go:61] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running
	I0314 01:03:27.042197   65557 system_pods.go:61] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running
	I0314 01:03:27.042201   65557 system_pods.go:61] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running
	I0314 01:03:27.042206   65557 system_pods.go:61] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running
	I0314 01:03:27.042213   65557 system_pods.go:61] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:03:27.042220   65557 system_pods.go:61] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running
	I0314 01:03:27.042231   65557 system_pods.go:74] duration metric: took 3.862631414s to wait for pod list to return data ...
	I0314 01:03:27.042241   65557 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:03:27.045464   65557 default_sa.go:45] found service account: "default"
	I0314 01:03:27.045542   65557 default_sa.go:55] duration metric: took 3.286713ms for default service account to be created ...
	I0314 01:03:27.045573   65557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:03:27.057164   65557 system_pods.go:86] 8 kube-system pods found
	I0314 01:03:27.057193   65557 system_pods.go:89] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running
	I0314 01:03:27.057199   65557 system_pods.go:89] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running
	I0314 01:03:27.057204   65557 system_pods.go:89] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running
	I0314 01:03:27.057209   65557 system_pods.go:89] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running
	I0314 01:03:27.057213   65557 system_pods.go:89] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running
	I0314 01:03:27.057217   65557 system_pods.go:89] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running
	I0314 01:03:27.057224   65557 system_pods.go:89] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:03:27.057236   65557 system_pods.go:89] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running
	I0314 01:03:27.057243   65557 system_pods.go:126] duration metric: took 11.663667ms to wait for k8s-apps to be running ...
	I0314 01:03:27.057249   65557 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:03:27.057295   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:03:27.075469   65557 system_svc.go:56] duration metric: took 18.20927ms WaitForService to wait for kubelet
	I0314 01:03:27.075501   65557 kubeadm.go:576] duration metric: took 4m23.675393774s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:03:27.075521   65557 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:03:27.079149   65557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:03:27.079177   65557 node_conditions.go:123] node cpu capacity is 2
	I0314 01:03:27.079191   65557 node_conditions.go:105] duration metric: took 3.664222ms to run NodePressure ...
	I0314 01:03:27.079204   65557 start.go:240] waiting for startup goroutines ...
	I0314 01:03:27.079214   65557 start.go:245] waiting for cluster config update ...
	I0314 01:03:27.079228   65557 start.go:254] writing updated cluster config ...
	I0314 01:03:27.079567   65557 ssh_runner.go:195] Run: rm -f paused
	I0314 01:03:27.128453   65557 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 01:03:27.131043   65557 out.go:177] * Done! kubectl is now configured to use "embed-certs-164135" cluster and "default" namespace by default
	I0314 01:03:25.961895   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:25.962127   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:35.962149   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:35.962352   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:55.963116   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:55.963372   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:04:35.964528   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:04:35.964814   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:04:35.964841   66232 kubeadm.go:309] 
	I0314 01:04:35.964900   66232 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 01:04:35.964961   66232 kubeadm.go:309] 		timed out waiting for the condition
	I0314 01:04:35.964972   66232 kubeadm.go:309] 
	I0314 01:04:35.965026   66232 kubeadm.go:309] 	This error is likely caused by:
	I0314 01:04:35.965074   66232 kubeadm.go:309] 		- The kubelet is not running
	I0314 01:04:35.965219   66232 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 01:04:35.965231   66232 kubeadm.go:309] 
	I0314 01:04:35.965372   66232 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 01:04:35.965421   66232 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 01:04:35.965476   66232 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 01:04:35.965489   66232 kubeadm.go:309] 
	I0314 01:04:35.965638   66232 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 01:04:35.965743   66232 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 01:04:35.965753   66232 kubeadm.go:309] 
	I0314 01:04:35.965872   66232 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 01:04:35.965991   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 01:04:35.966110   66232 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 01:04:35.966220   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 01:04:35.966237   66232 kubeadm.go:309] 
	I0314 01:04:35.966903   66232 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:04:35.967031   66232 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 01:04:35.967165   66232 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0314 01:04:35.967278   66232 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
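
The repeated kubelet-check failures above come from kubeadm probing the kubelet's local healthz endpoint on port 10248 and getting connection refused because nothing is listening there, i.e. the kubelet is not running or not healthy. A sketch of the equivalent probe, intended to be run on the node itself:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Same endpoint kubeadm's kubelet-check curls; 10248 is the kubelet's default healthz port.
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// e.g. "connection refused", as seen repeatedly in the log above.
		fmt.Println("kubelet healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
}
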
	
	I0314 01:04:35.967374   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 01:04:36.533381   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:04:36.550315   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:04:36.562559   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:04:36.562582   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 01:04:36.562646   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:04:36.573080   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:04:36.573148   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:04:36.583367   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:04:36.592837   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:04:36.592905   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:04:36.602671   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:04:36.611880   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:04:36.611923   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:04:36.621373   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:04:36.630200   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:04:36.630250   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
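
The block above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and the file is removed when the endpoint cannot be confirmed (here the greps fail simply because the files no longer exist after `kubeadm reset`). A pure-Go sketch of the same rule, with the endpoint taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Missing file or missing endpoint: treat the config as stale and remove it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale config:", f)
			_ = os.Remove(f)
			continue
		}
		fmt.Println("keeping:", f)
	}
}
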
	I0314 01:04:36.639622   66232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:04:36.876475   66232 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:06:32.905531   66232 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 01:06:32.905658   66232 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 01:06:32.907378   66232 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 01:06:32.907462   66232 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:06:32.907597   66232 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:06:32.907758   66232 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:06:32.907878   66232 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 01:06:32.907969   66232 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:06:32.909826   66232 out.go:204]   - Generating certificates and keys ...
	I0314 01:06:32.909915   66232 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:06:32.909976   66232 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:06:32.910065   66232 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 01:06:32.910143   66232 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 01:06:32.910232   66232 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 01:06:32.910306   66232 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 01:06:32.910371   66232 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 01:06:32.910450   66232 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 01:06:32.910516   66232 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 01:06:32.910579   66232 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 01:06:32.910616   66232 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 01:06:32.910705   66232 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:06:32.910809   66232 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:06:32.910860   66232 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:06:32.910946   66232 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:06:32.911032   66232 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:06:32.911131   66232 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:06:32.911225   66232 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:06:32.911290   66232 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:06:32.911360   66232 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 01:06:32.912972   66232 out.go:204]   - Booting up control plane ...
	I0314 01:06:32.913087   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:06:32.913169   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:06:32.913260   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:06:32.913336   66232 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:06:32.913475   66232 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:06:32.913555   66232 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 01:06:32.913645   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.913879   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.913979   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914216   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914294   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914461   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914521   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914704   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914827   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.915063   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.915076   66232 kubeadm.go:309] 
	I0314 01:06:32.915112   66232 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 01:06:32.915167   66232 kubeadm.go:309] 		timed out waiting for the condition
	I0314 01:06:32.915177   66232 kubeadm.go:309] 
	I0314 01:06:32.915230   66232 kubeadm.go:309] 	This error is likely caused by:
	I0314 01:06:32.915269   66232 kubeadm.go:309] 		- The kubelet is not running
	I0314 01:06:32.915353   66232 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 01:06:32.915360   66232 kubeadm.go:309] 
	I0314 01:06:32.915441   66232 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 01:06:32.915469   66232 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 01:06:32.915498   66232 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 01:06:32.915505   66232 kubeadm.go:309] 
	I0314 01:06:32.915613   66232 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 01:06:32.915700   66232 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 01:06:32.915712   66232 kubeadm.go:309] 
	I0314 01:06:32.915855   66232 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 01:06:32.915955   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 01:06:32.916023   66232 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 01:06:32.916088   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 01:06:32.916154   66232 kubeadm.go:393] duration metric: took 7m58.036160375s to StartCluster
	I0314 01:06:32.916166   66232 kubeadm.go:309] 
	I0314 01:06:32.916226   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:06:32.916295   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:06:32.972336   66232 cri.go:89] found id: ""
	I0314 01:06:32.972364   66232 logs.go:276] 0 containers: []
	W0314 01:06:32.972371   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:06:32.972380   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:06:32.972434   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:06:33.023008   66232 cri.go:89] found id: ""
	I0314 01:06:33.023039   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.023050   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:06:33.023057   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:06:33.023130   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:06:33.061974   66232 cri.go:89] found id: ""
	I0314 01:06:33.062002   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.062011   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:06:33.062017   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:06:33.062085   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:06:33.101221   66232 cri.go:89] found id: ""
	I0314 01:06:33.101252   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.101264   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:06:33.101271   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:06:33.101330   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:06:33.139665   66232 cri.go:89] found id: ""
	I0314 01:06:33.139689   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.139697   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:06:33.139707   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:06:33.139753   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:06:33.186493   66232 cri.go:89] found id: ""
	I0314 01:06:33.186519   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.186530   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:06:33.186538   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:06:33.186610   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:06:33.236042   66232 cri.go:89] found id: ""
	I0314 01:06:33.236071   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.236083   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:06:33.236091   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:06:33.236148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:06:33.279285   66232 cri.go:89] found id: ""
	I0314 01:06:33.279316   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.279326   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:06:33.279338   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:06:33.279361   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:06:33.331702   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:06:33.331734   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:06:33.347222   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:06:33.347249   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:06:33.437201   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:06:33.437225   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:06:33.437240   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:06:33.550099   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:06:33.550135   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0314 01:06:33.596794   66232 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 01:06:33.596833   66232 out.go:239] * 
	W0314 01:06:33.596906   66232 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 01:06:33.596927   66232 out.go:239] * 
	W0314 01:06:33.597713   66232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 01:06:33.601567   66232 out.go:177] 
	W0314 01:06:33.602661   66232 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 01:06:33.602704   66232 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 01:06:33.602722   66232 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 01:06:33.604223   66232 out.go:177] 
	
	
	==> CRI-O <==
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.298271145Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1135d8ed633c0cfc57a4e79393d9dfebb02071f8f70bb7a0df5780b0c5c1dfc3,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-lptfk,Uid:597ce2ed-6ab6-418e-9720-9ae9d275cb33,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377895113583472,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-lptfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 597ce2ed-6ab6-418e-9720-9ae9d275cb33,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T00:58:07.188114122Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:77a92686f45fb38b5ab70ce0519b56c6f166843186b322e8eb0c86b61928c055,Metadata:&PodSandboxMetadata{Name:busybox,Uid:1dfd2648-2774-42e2-8674-f4f1b8cc2856,Namespace:default,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1710377895003623074,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dfd2648-2774-42e2-8674-f4f1b8cc2856,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T00:58:07.188106736Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c5e68e46a62290fe81ba7cd256b82cfa68dbb9d8fd6caf57730bfcc9fdf9b476,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-7pzll,Uid:84952403-8cff-4fa3-b7ef-d98ab0edf7a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377893324663580,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-7pzll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84952403-8cff-4fa3-b7ef-d98ab0edf7a8,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T00:58:07.1
88124625Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c6c6fd086a01a2761e9099b3c6d6ccd2ad9dbfc68a4f02f13569529ae2bcf4b5,Metadata:&PodSandboxMetadata{Name:kube-proxy-wpdb9,Uid:013df8e8-ce80-4cff-937a-16742369c561,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377887511424571,Labels:map[string]string{controller-revision-hash: 79c5f556d9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wpdb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 013df8e8-ce80-4cff-937a-16742369c561,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T00:58:07.188121267Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:113f608a-28d1-4365-9898-dd6f37150317,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377887507394748,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2024-03-14T00:58:07.188125923Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4e910e75784faad378a2f043b7693c41425fd01f539054c2bf5acee1d9e14cb,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-585806,Uid:2e8e3458d0fcc73b22639020e4dbe845,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377882746290097,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e8e3458d0fcc73b22639020e4dbe845,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.115:8443,kubernetes.io/config.hash: 2e8e3458d0fcc73b22639020e4dbe845,kubernetes.io/config.seen: 2024-03-14T00:58:02.180095687Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:47abf83a24220ca0cd550626b9c27bea49e535fafacb3bcf7f73dda8fc92c5de,Metadata:&PodSandboxMetadata{N
ame:etcd-no-preload-585806,Uid:f96a85171c835c0ee3580825ac290b83,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377882741726882,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f96a85171c835c0ee3580825ac290b83,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.115:2379,kubernetes.io/config.hash: f96a85171c835c0ee3580825ac290b83,kubernetes.io/config.seen: 2024-03-14T00:58:02.239973387Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:de219d83395e9985021633273931b05ecb6669a4195154bf30455fbf5317ffc7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-585806,Uid:e656d08e1c0674b0323bc28bbc43a651,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377882725425547,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io
.kubernetes.pod.name: kube-scheduler-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e656d08e1c0674b0323bc28bbc43a651,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e656d08e1c0674b0323bc28bbc43a651,kubernetes.io/config.seen: 2024-03-14T00:58:02.180093882Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1b6c4eb38b6dd3afa645977da44ebd6cfdd7e2d40cf96a976fe4ef3958f99ab8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-585806,Uid:108778185192fe3195fda362ff928a03,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377882724411989,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108778185192fe3195fda362ff928a03,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 108778185192fe3195fda362ff928a03,ku
bernetes.io/config.seen: 2024-03-14T00:58:02.180087310Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5e67985d-bdde-4227-b7bb-b96c21d33925 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.299398497Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48ab3a0a-c767-449d-b969-81a6794171c7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.299492986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48ab3a0a-c767-449d-b969-81a6794171c7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.299758923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377920512797308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c016d74dfbbf394363811f4d82151fc6495aa1b501a0b5248d9a8a13ce355d5,PodSandboxId:77a92686f45fb38b5ab70ce0519b56c6f166843186b322e8eb0c86b61928c055,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377897681117082,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dfd2648-2774-42e2-8674-f4f1b8cc2856,},Annotations:map[string]string{io.kubernetes.container.hash: a9ac11a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2,PodSandboxId:1135d8ed633c0cfc57a4e79393d9dfebb02071f8f70bb7a0df5780b0c5c1dfc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710377895293177524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lptfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 597ce2ed-6ab6-418e-9720-9ae9d275cb33,},Annotations:map[string]string{io.kubernetes.container.hash: 8248fac3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828,PodSandboxId:c6c6fd086a01a2761e9099b3c6d6ccd2ad9dbfc68a4f02f13569529ae2bcf4b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710377888683363130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wpdb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 013df8e8-ce80-4cff-93
7a-16742369c561,},Annotations:map[string]string{io.kubernetes.container.hash: da357755,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2,PodSandboxId:1b6c4eb38b6dd3afa645977da44ebd6cfdd7e2d40cf96a976fe4ef3958f99ab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710377883040966665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108
778185192fe3195fda362ff928a03,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b,PodSandboxId:47abf83a24220ca0cd550626b9c27bea49e535fafacb3bcf7f73dda8fc92c5de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710377883092021845,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f96a85171c835c0ee3580825ac290b83,},Annotations
:map[string]string{io.kubernetes.container.hash: 9f5528d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf,PodSandboxId:de219d83395e9985021633273931b05ecb6669a4195154bf30455fbf5317ffc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710377883011351668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e656d08e1c0674b0323bc28bbc43a651,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239,PodSandboxId:e4e910e75784faad378a2f043b7693c41425fd01f539054c2bf5acee1d9e14cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710377882999235667,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e8e3458d0fcc73b22639020e4dbe845,},Annotations:map[string]string{io.kube
rnetes.container.hash: a84dd2b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48ab3a0a-c767-449d-b969-81a6794171c7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.317451245Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e1eafe0-5890-4d73-91e2-68f9a82c5643 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.317521642Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e1eafe0-5890-4d73-91e2-68f9a82c5643 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.319634049Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a74ae13-f7a2-4c1f-b596-ae7742a759f7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.320115901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378696320091918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a74ae13-f7a2-4c1f-b596-ae7742a759f7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.320616781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f6c7246-9ebc-4d16-8096-668a760d6a9c name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.320703945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f6c7246-9ebc-4d16-8096-668a760d6a9c name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.320940923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377920512797308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c016d74dfbbf394363811f4d82151fc6495aa1b501a0b5248d9a8a13ce355d5,PodSandboxId:77a92686f45fb38b5ab70ce0519b56c6f166843186b322e8eb0c86b61928c055,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377897681117082,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dfd2648-2774-42e2-8674-f4f1b8cc2856,},Annotations:map[string]string{io.kubernetes.container.hash: a9ac11a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2,PodSandboxId:1135d8ed633c0cfc57a4e79393d9dfebb02071f8f70bb7a0df5780b0c5c1dfc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710377895293177524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lptfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 597ce2ed-6ab6-418e-9720-9ae9d275cb33,},Annotations:map[string]string{io.kubernetes.container.hash: 8248fac3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377888700609032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
13f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828,PodSandboxId:c6c6fd086a01a2761e9099b3c6d6ccd2ad9dbfc68a4f02f13569529ae2bcf4b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710377888683363130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wpdb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 013df8e8-ce80-4cff-937a-16742369c5
61,},Annotations:map[string]string{io.kubernetes.container.hash: da357755,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2,PodSandboxId:1b6c4eb38b6dd3afa645977da44ebd6cfdd7e2d40cf96a976fe4ef3958f99ab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710377883040966665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108778185192fe31
95fda362ff928a03,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b,PodSandboxId:47abf83a24220ca0cd550626b9c27bea49e535fafacb3bcf7f73dda8fc92c5de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710377883092021845,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f96a85171c835c0ee3580825ac290b83,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 9f5528d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf,PodSandboxId:de219d83395e9985021633273931b05ecb6669a4195154bf30455fbf5317ffc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710377883011351668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e656d08e1c0674b0323bc28bbc43a651,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239,PodSandboxId:e4e910e75784faad378a2f043b7693c41425fd01f539054c2bf5acee1d9e14cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710377882999235667,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e8e3458d0fcc73b22639020e4dbe845,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: a84dd2b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f6c7246-9ebc-4d16-8096-668a760d6a9c name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.364996976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b85027c-8c70-49b0-9344-680cfe0845be name=/runtime.v1.RuntimeService/Version
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.365081440Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b85027c-8c70-49b0-9344-680cfe0845be name=/runtime.v1.RuntimeService/Version
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.366126138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae4aefca-0bf8-428a-9fed-c64effd1bab6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.366528828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378696366505821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae4aefca-0bf8-428a-9fed-c64effd1bab6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.369977862Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a8b3eee-9345-492c-b9c8-25bc61a53fb2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.371221940Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a8b3eee-9345-492c-b9c8-25bc61a53fb2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.371785987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377920512797308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c016d74dfbbf394363811f4d82151fc6495aa1b501a0b5248d9a8a13ce355d5,PodSandboxId:77a92686f45fb38b5ab70ce0519b56c6f166843186b322e8eb0c86b61928c055,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377897681117082,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dfd2648-2774-42e2-8674-f4f1b8cc2856,},Annotations:map[string]string{io.kubernetes.container.hash: a9ac11a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2,PodSandboxId:1135d8ed633c0cfc57a4e79393d9dfebb02071f8f70bb7a0df5780b0c5c1dfc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710377895293177524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lptfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 597ce2ed-6ab6-418e-9720-9ae9d275cb33,},Annotations:map[string]string{io.kubernetes.container.hash: 8248fac3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377888700609032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
13f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828,PodSandboxId:c6c6fd086a01a2761e9099b3c6d6ccd2ad9dbfc68a4f02f13569529ae2bcf4b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710377888683363130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wpdb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 013df8e8-ce80-4cff-937a-16742369c5
61,},Annotations:map[string]string{io.kubernetes.container.hash: da357755,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2,PodSandboxId:1b6c4eb38b6dd3afa645977da44ebd6cfdd7e2d40cf96a976fe4ef3958f99ab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710377883040966665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108778185192fe31
95fda362ff928a03,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b,PodSandboxId:47abf83a24220ca0cd550626b9c27bea49e535fafacb3bcf7f73dda8fc92c5de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710377883092021845,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f96a85171c835c0ee3580825ac290b83,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 9f5528d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf,PodSandboxId:de219d83395e9985021633273931b05ecb6669a4195154bf30455fbf5317ffc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710377883011351668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e656d08e1c0674b0323bc28bbc43a651,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239,PodSandboxId:e4e910e75784faad378a2f043b7693c41425fd01f539054c2bf5acee1d9e14cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710377882999235667,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e8e3458d0fcc73b22639020e4dbe845,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: a84dd2b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a8b3eee-9345-492c-b9c8-25bc61a53fb2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.418385550Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=504cf288-87f6-48c9-9f09-7df89c54d769 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.418488957Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=504cf288-87f6-48c9-9f09-7df89c54d769 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.420022004Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e667c254-d2ec-4255-95e3-fe4decb0d1f3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.420620896Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378696420591473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e667c254-d2ec-4255-95e3-fe4decb0d1f3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.421760280Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21c82adc-0f9d-49c5-9fc2-ce9ee5af9e2c name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.421832523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21c82adc-0f9d-49c5-9fc2-ce9ee5af9e2c name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:36 no-preload-585806 crio[696]: time="2024-03-14 01:11:36.422129980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377920512797308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c016d74dfbbf394363811f4d82151fc6495aa1b501a0b5248d9a8a13ce355d5,PodSandboxId:77a92686f45fb38b5ab70ce0519b56c6f166843186b322e8eb0c86b61928c055,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377897681117082,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dfd2648-2774-42e2-8674-f4f1b8cc2856,},Annotations:map[string]string{io.kubernetes.container.hash: a9ac11a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2,PodSandboxId:1135d8ed633c0cfc57a4e79393d9dfebb02071f8f70bb7a0df5780b0c5c1dfc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710377895293177524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lptfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 597ce2ed-6ab6-418e-9720-9ae9d275cb33,},Annotations:map[string]string{io.kubernetes.container.hash: 8248fac3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377888700609032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
13f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828,PodSandboxId:c6c6fd086a01a2761e9099b3c6d6ccd2ad9dbfc68a4f02f13569529ae2bcf4b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710377888683363130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wpdb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 013df8e8-ce80-4cff-937a-16742369c5
61,},Annotations:map[string]string{io.kubernetes.container.hash: da357755,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2,PodSandboxId:1b6c4eb38b6dd3afa645977da44ebd6cfdd7e2d40cf96a976fe4ef3958f99ab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710377883040966665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108778185192fe31
95fda362ff928a03,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b,PodSandboxId:47abf83a24220ca0cd550626b9c27bea49e535fafacb3bcf7f73dda8fc92c5de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710377883092021845,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f96a85171c835c0ee3580825ac290b83,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 9f5528d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf,PodSandboxId:de219d83395e9985021633273931b05ecb6669a4195154bf30455fbf5317ffc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710377883011351668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e656d08e1c0674b0323bc28bbc43a651,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239,PodSandboxId:e4e910e75784faad378a2f043b7693c41425fd01f539054c2bf5acee1d9e14cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710377882999235667,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e8e3458d0fcc73b22639020e4dbe845,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: a84dd2b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21c82adc-0f9d-49c5-9fc2-ce9ee5af9e2c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ba8fd6893aa1d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   3ec3727497680       storage-provisioner
	3c016d74dfbbf       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   77a92686f45fb       busybox
	7a23310363170       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   1135d8ed633c0       coredns-76f75df574-lptfk
	3d431baedcd8c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   3ec3727497680       storage-provisioner
	3c9a4136bfd32       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      13 minutes ago      Running             kube-proxy                1                   c6c6fd086a01a       kube-proxy-wpdb9
	d05f2a8d7b1aa       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      13 minutes ago      Running             etcd                      1                   47abf83a24220       etcd-no-preload-585806
	396e0c2ab791a       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      13 minutes ago      Running             kube-controller-manager   1                   1b6c4eb38b6dd       kube-controller-manager-no-preload-585806
	eaf7cd9d2f3f8       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      13 minutes ago      Running             kube-scheduler            1                   de219d83395e9       kube-scheduler-no-preload-585806
	310169fe474c4       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      13 minutes ago      Running             kube-apiserver            1                   e4e910e75784f       kube-apiserver-no-preload-585806
	
	
	==> coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37552 - 35520 "HINFO IN 2493074614276229977.5371900738671167779. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008631751s
	
	
	==> describe nodes <==
	Name:               no-preload-585806
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-585806
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=no-preload-585806
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T00_50_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:50:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-585806
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 01:11:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 01:08:50 +0000   Thu, 14 Mar 2024 00:50:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 01:08:50 +0000   Thu, 14 Mar 2024 00:50:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 01:08:50 +0000   Thu, 14 Mar 2024 00:50:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 01:08:50 +0000   Thu, 14 Mar 2024 00:58:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    no-preload-585806
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 90a091811dad4078bf279872b150db37
	  System UUID:                90a09181-1dad-4078-bf27-9872b150db37
	  Boot ID:                    7b4921fb-3e23-45df-a6de-d03fc0ff22c5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-76f75df574-lptfk                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-no-preload-585806                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-585806             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-585806    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-wpdb9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-no-preload-585806             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-7pzll              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-585806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-585806 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-585806 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node no-preload-585806 status is now: NodeReady
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           20m                node-controller  Node no-preload-585806 event: Registered Node no-preload-585806 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-585806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-585806 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-585806 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-585806 event: Registered Node no-preload-585806 in Controller
	
	
	==> dmesg <==
	[Mar14 00:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052411] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041611] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.522446] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.859067] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.654073] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.669454] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.063148] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067479] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.199936] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.154049] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.290727] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[ +16.789590] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.065797] kauditd_printk_skb: 130 callbacks suppressed
	[Mar14 00:58] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +5.651706] kauditd_printk_skb: 100 callbacks suppressed
	[  +4.513730] systemd-fstab-generator[1929]: Ignoring "noauto" option for root device
	[  +1.272423] kauditd_printk_skb: 56 callbacks suppressed
	[  +7.900161] kauditd_printk_skb: 30 callbacks suppressed
	
	
	==> etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] <==
	{"level":"info","ts":"2024-03-14T00:58:09.106081Z","caller":"traceutil/trace.go:171","msg":"trace[708046047] linearizableReadLoop","detail":"{readStateIndex:505; appliedIndex:504; }","duration":"566.488596ms","start":"2024-03-14T00:58:08.539571Z","end":"2024-03-14T00:58:09.10606Z","steps":["trace[708046047] 'read index received'  (duration: 566.265922ms)","trace[708046047] 'applied index is now lower than readState.Index'  (duration: 222.099µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T00:58:09.106284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"566.71234ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-cluster-critical\" ","response":"range_response_count:1 size:477"}
	{"level":"info","ts":"2024-03-14T00:58:09.106323Z","caller":"traceutil/trace.go:171","msg":"trace[1117888757] range","detail":"{range_begin:/registry/priorityclasses/system-cluster-critical; range_end:; response_count:1; response_revision:486; }","duration":"566.767606ms","start":"2024-03-14T00:58:08.539544Z","end":"2024-03-14T00:58:09.106312Z","steps":["trace[1117888757] 'agreement among raft nodes before linearized reading'  (duration: 566.625647ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T00:58:09.106326Z","caller":"traceutil/trace.go:171","msg":"trace[521017369] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"569.352008ms","start":"2024-03-14T00:58:08.536774Z","end":"2024-03-14T00:58:09.106126Z","steps":["trace[521017369] 'process raft request'  (duration: 569.154285ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:09.106627Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.109144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T00:58:09.106675Z","caller":"traceutil/trace.go:171","msg":"trace[1846651354] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:486; }","duration":"162.153984ms","start":"2024-03-14T00:58:08.944509Z","end":"2024-03-14T00:58:09.106663Z","steps":["trace[1846651354] 'agreement among raft nodes before linearized reading'  (duration: 162.100913ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:09.106844Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.019086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T00:58:09.106937Z","caller":"traceutil/trace.go:171","msg":"trace[1465659475] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:486; }","duration":"273.061418ms","start":"2024-03-14T00:58:08.833818Z","end":"2024-03-14T00:58:09.10688Z","steps":["trace[1465659475] 'agreement among raft nodes before linearized reading'  (duration: 273.009705ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:09.107387Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T00:58:08.536754Z","time spent":"569.87697ms","remote":"127.0.0.1:60372","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":795,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-76f75df574-lptfk.17bc7ba062c6a125\" mod_revision:479 > success:<request_put:<key:\"/registry/events/kube-system/coredns-76f75df574-lptfk.17bc7ba062c6a125\" value_size:707 lease:4153601138103983990 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-76f75df574-lptfk.17bc7ba062c6a125\" > >"}
	{"level":"warn","ts":"2024-03-14T00:58:09.106356Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T00:58:08.539531Z","time spent":"566.816968ms","remote":"127.0.0.1:60650","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":1,"response size":501,"request content":"key:\"/registry/priorityclasses/system-cluster-critical\" "}
	{"level":"warn","ts":"2024-03-14T00:58:09.106591Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"563.610595ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" ","response":"range_response_count:55 size:39880"}
	{"level":"info","ts":"2024-03-14T00:58:09.10801Z","caller":"traceutil/trace.go:171","msg":"trace[1734059500] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:55; response_revision:486; }","duration":"565.029177ms","start":"2024-03-14T00:58:08.542971Z","end":"2024-03-14T00:58:09.108Z","steps":["trace[1734059500] 'agreement among raft nodes before linearized reading'  (duration: 563.411427ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:09.108115Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T00:58:08.542946Z","time spent":"565.155773ms","remote":"127.0.0.1:60648","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":55,"response size":39904,"request content":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" "}
	{"level":"warn","ts":"2024-03-14T00:58:09.779335Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.383267ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13376973174958759907 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/busybox.17bc7ba0647d4828\" mod_revision:481 > success:<request_put:<key:\"/registry/events/default/busybox.17bc7ba0647d4828\" value_size:678 lease:4153601138103983990 >> failure:<request_range:<key:\"/registry/events/default/busybox.17bc7ba0647d4828\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T00:58:09.779448Z","caller":"traceutil/trace.go:171","msg":"trace[245294986] linearizableReadLoop","detail":"{readStateIndex:506; appliedIndex:505; }","duration":"664.105243ms","start":"2024-03-14T00:58:09.115327Z","end":"2024-03-14T00:58:09.779433Z","steps":["trace[245294986] 'read index received'  (duration: 406.172625ms)","trace[245294986] 'applied index is now lower than readState.Index'  (duration: 257.931557ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T00:58:09.779524Z","caller":"traceutil/trace.go:171","msg":"trace[2022133822] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"664.849816ms","start":"2024-03-14T00:58:09.114665Z","end":"2024-03-14T00:58:09.779515Z","steps":["trace[2022133822] 'process raft request'  (duration: 407.085288ms)","trace[2022133822] 'compare'  (duration: 257.068402ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T00:58:09.779594Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T00:58:09.114653Z","time spent":"664.894497ms","remote":"127.0.0.1:60372","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":745,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/busybox.17bc7ba0647d4828\" mod_revision:481 > success:<request_put:<key:\"/registry/events/default/busybox.17bc7ba0647d4828\" value_size:678 lease:4153601138103983990 >> failure:<request_range:<key:\"/registry/events/default/busybox.17bc7ba0647d4828\" > >"}
	{"level":"warn","ts":"2024-03-14T00:58:09.779821Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.767481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-585806\" ","response":"range_response_count:1 size:4605"}
	{"level":"info","ts":"2024-03-14T00:58:09.779985Z","caller":"traceutil/trace.go:171","msg":"trace[1197537899] range","detail":"{range_begin:/registry/minions/no-preload-585806; range_end:; response_count:1; response_revision:487; }","duration":"154.834363ms","start":"2024-03-14T00:58:09.625039Z","end":"2024-03-14T00:58:09.779874Z","steps":["trace[1197537899] 'agreement among raft nodes before linearized reading'  (duration: 154.541066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:09.780123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"664.788437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" ","response":"range_response_count:1 size:840"}
	{"level":"info","ts":"2024-03-14T00:58:09.780179Z","caller":"traceutil/trace.go:171","msg":"trace[917320323] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:1; response_revision:487; }","duration":"664.845462ms","start":"2024-03-14T00:58:09.115324Z","end":"2024-03-14T00:58:09.780169Z","steps":["trace[917320323] 'agreement among raft nodes before linearized reading'  (duration: 664.720836ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:09.780212Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T00:58:09.115296Z","time spent":"664.906193ms","remote":"127.0.0.1:60640","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":864,"request content":"key:\"/registry/clusterroles/system:aggregate-to-admin\" "}
	{"level":"info","ts":"2024-03-14T01:08:05.212167Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":796}
	{"level":"info","ts":"2024-03-14T01:08:05.21449Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":796,"took":"1.933366ms","hash":339514941}
	{"level":"info","ts":"2024-03-14T01:08:05.214548Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":339514941,"revision":796,"compact-revision":-1}
	
	
	==> kernel <==
	 01:11:36 up 14 min,  0 users,  load average: 0.08, 0.09, 0.08
	Linux no-preload-585806 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] <==
	I0314 01:06:07.746303       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:08:06.749557       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:08:06.750034       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0314 01:08:07.750188       1 handler_proxy.go:93] no RequestInfo found in the context
	W0314 01:08:07.750271       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:08:07.750389       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0314 01:08:07.750399       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:08:07.750401       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 01:08:07.751483       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:09:07.751204       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:09:07.751395       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:09:07.751425       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:09:07.752662       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:09:07.752718       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:09:07.752743       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:11:07.751803       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:11:07.752222       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:11:07.752259       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:11:07.754088       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:11:07.754160       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:11:07.754170       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] <==
	I0314 01:05:51.793026       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:06:21.377083       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:06:21.800751       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:06:51.381774       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:06:51.810105       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:07:21.388125       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:07:21.818253       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:07:51.394365       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:07:51.826719       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:08:21.400016       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:08:21.838419       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:08:51.405255       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:08:51.850053       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:09:21.411217       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:09:21.857406       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 01:09:28.315355       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="335.536µs"
	I0314 01:09:41.312637       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="195.274µs"
	E0314 01:09:51.418559       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:09:51.867215       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:10:21.423945       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:10:21.876480       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:10:51.429061       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:10:51.886216       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:11:21.434960       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:11:21.895154       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] <==
	I0314 00:58:09.613353       1 server_others.go:72] "Using iptables proxy"
	I0314 00:58:09.783408       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.115"]
	I0314 00:58:09.831983       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0314 00:58:09.832012       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 00:58:09.832028       1 server_others.go:168] "Using iptables Proxier"
	I0314 00:58:09.836006       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 00:58:09.836385       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0314 00:58:09.836432       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:58:09.837591       1 config.go:188] "Starting service config controller"
	I0314 00:58:09.837677       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 00:58:09.838011       1 config.go:97] "Starting endpoint slice config controller"
	I0314 00:58:09.838066       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 00:58:09.838724       1 config.go:315] "Starting node config controller"
	I0314 00:58:09.845834       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 00:58:09.845879       1 shared_informer.go:318] Caches are synced for node config
	I0314 00:58:09.939146       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 00:58:09.939538       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] <==
	I0314 00:58:04.272869       1 serving.go:380] Generated self-signed cert in-memory
	W0314 00:58:06.721345       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 00:58:06.721448       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W0314 00:58:06.721477       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 00:58:06.721501       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 00:58:06.750451       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0314 00:58:06.750571       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:58:06.752277       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 00:58:06.752392       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 00:58:06.753236       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 00:58:06.753293       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 00:58:06.853008       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 01:09:14 no-preload-585806 kubelet[1328]: E0314 01:09:14.309821    1328 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 14 01:09:14 no-preload-585806 kubelet[1328]: E0314 01:09:14.309967    1328 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 14 01:09:14 no-preload-585806 kubelet[1328]: E0314 01:09:14.310167    1328 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ckkb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-7pzll_kube-system(84952403-8cff-4fa3-b7ef-d98ab0edf7a8): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 14 01:09:14 no-preload-585806 kubelet[1328]: E0314 01:09:14.310218    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:09:28 no-preload-585806 kubelet[1328]: E0314 01:09:28.297683    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:09:41 no-preload-585806 kubelet[1328]: E0314 01:09:41.297059    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:09:52 no-preload-585806 kubelet[1328]: E0314 01:09:52.299642    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:10:02 no-preload-585806 kubelet[1328]: E0314 01:10:02.321190    1328 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 01:10:02 no-preload-585806 kubelet[1328]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 01:10:02 no-preload-585806 kubelet[1328]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 01:10:02 no-preload-585806 kubelet[1328]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:10:02 no-preload-585806 kubelet[1328]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:10:03 no-preload-585806 kubelet[1328]: E0314 01:10:03.296926    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:10:16 no-preload-585806 kubelet[1328]: E0314 01:10:16.297317    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:10:28 no-preload-585806 kubelet[1328]: E0314 01:10:28.298030    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:10:43 no-preload-585806 kubelet[1328]: E0314 01:10:43.297762    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:10:55 no-preload-585806 kubelet[1328]: E0314 01:10:55.297772    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:11:02 no-preload-585806 kubelet[1328]: E0314 01:11:02.324105    1328 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 01:11:02 no-preload-585806 kubelet[1328]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 01:11:02 no-preload-585806 kubelet[1328]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 01:11:02 no-preload-585806 kubelet[1328]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:11:02 no-preload-585806 kubelet[1328]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:11:07 no-preload-585806 kubelet[1328]: E0314 01:11:07.298084    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:11:21 no-preload-585806 kubelet[1328]: E0314 01:11:21.298170    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:11:32 no-preload-585806 kubelet[1328]: E0314 01:11:32.299427    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	
	
	==> storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] <==
	I0314 00:58:09.580403       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0314 00:58:39.582611       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] <==
	I0314 00:58:40.613396       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 00:58:40.627695       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 00:58:40.627755       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 00:58:40.640781       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 00:58:40.641063       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-585806_aac8273f-560e-4935-b9f5-770c1e6a7002!
	I0314 00:58:40.645742       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4035f95b-5bbe-4852-a5ce-adc15b7d357d", APIVersion:"v1", ResourceVersion:"561", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-585806_aac8273f-560e-4935-b9f5-770c1e6a7002 became leader
	I0314 00:58:40.741543       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-585806_aac8273f-560e-4935-b9f5-770c1e6a7002!
	

                                                
                                                
-- /stdout --
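The two storage-provisioner entries in the log above are the same pod before and after a container restart: the first instance exits fatally because the in-cluster API server address (10.96.0.1:443) was apparently not reachable yet right after the VM restart, and the replacement container then acquires the k8s.io-minikube-hostpath lease and runs normally. A minimal manual check of that sequence (a sketch only; it assumes minikube's usual storage-provisioner pod name in kube-system) would be:

	kubectl --context no-preload-585806 -n kube-system get pod storage-provisioner
	kubectl --context no-preload-585806 -n kube-system logs storage-provisioner --previous

The --previous flag returns the logs of the earlier, crashed container, which is where the i/o timeout above came from.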
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-585806 -n no-preload-585806
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-585806 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-7pzll
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-585806 describe pod metrics-server-57f55c9bc5-7pzll
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-585806 describe pod metrics-server-57f55c9bc5-7pzll: exit status 1 (63.46163ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-7pzll" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-585806 describe pod metrics-server-57f55c9bc5-7pzll: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.37s)
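A note on the non-running pod singled out above: the metrics-server ImagePullBackOff is expected for this test group rather than being the failure itself. The Audit table reproduced in the next post-mortem shows the addon being enabled with --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain, so the kubelet is deliberately pointed at a registry that does not resolve and the pull can never succeed. A rough way to reproduce that state on a scratch profile (a sketch; the profile name is a placeholder and k8s-app=metrics-server is the addon's usual label, not taken from this run) would be:

	out/minikube-linux-amd64 -p <profile> addons enable metrics-server --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server

The FAIL itself comes from the kubernetes-dashboard wait in this test, the same symptom shown explicitly for the default-k8s-diff-port run below.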

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0314 01:02:46.696409   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 01:02:47.500865   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0314 01:03:15.676506   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-652215 -n default-k8s-diff-port-652215
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-14 01:11:43.295300065 +0000 UTC m=+6331.365061549
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
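The wait that just timed out is a poll on a label selector in the kubernetes-dashboard namespace; a roughly equivalent manual check against the same profile (a sketch using the namespace and selector quoted above) is:

	kubectl --context default-k8s-diff-port-652215 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-652215 -n kubernetes-dashboard get events --sort-by=.lastTimestamp

If nothing is listed at all, the likely cause is that the dashboard addon enable never completed against the stopped cluster; the Audit table below records "addons enable dashboard -p default-k8s-diff-port-652215" with no End Time.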
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652215 -n default-k8s-diff-port-652215
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-652215 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-652215 logs -n 25: (2.10279178s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-326260 sudo cat                              | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo find                             | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo crio                             | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-326260                                       | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-573365 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | disable-driver-mounts-573365                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:51 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-164135            | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-585806             | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-652215  | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC | 14 Mar 24 00:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC |                     |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-004791        | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-164135                 | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC | 14 Mar 24 01:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-585806                  | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-652215       | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 00:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-004791             | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC | 14 Mar 24 00:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:54:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:54:03.108880   66232 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:54:03.109016   66232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:54:03.109028   66232 out.go:304] Setting ErrFile to fd 2...
	I0314 00:54:03.109034   66232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:54:03.109233   66232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:54:03.109796   66232 out.go:298] Setting JSON to false
	I0314 00:54:03.110638   66232 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5786,"bootTime":1710371857,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:54:03.110699   66232 start.go:139] virtualization: kvm guest
	I0314 00:54:03.113106   66232 out.go:177] * [old-k8s-version-004791] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:54:03.114565   66232 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:54:03.115894   66232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:54:03.114598   66232 notify.go:220] Checking for updates...
	I0314 00:54:03.119029   66232 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:54:03.120493   66232 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:54:03.121915   66232 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:54:03.123383   66232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:54:03.125258   66232 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:54:03.125814   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:54:03.125873   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:54:03.140521   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0314 00:54:03.140889   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:54:03.141339   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:54:03.141362   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:54:03.141702   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:54:03.141898   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:54:03.143989   66232 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0314 00:54:03.145403   66232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:54:03.145671   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:54:03.145711   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:54:03.159852   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0314 00:54:03.160244   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:54:03.160722   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:54:03.160742   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:54:03.161088   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:54:03.161279   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:54:03.197047   66232 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 00:54:03.198624   66232 start.go:297] selected driver: kvm2
	I0314 00:54:03.198642   66232 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:54:03.198784   66232 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:54:03.199455   66232 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:54:03.199536   66232 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:54:03.214619   66232 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:54:03.214983   66232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 00:54:03.215045   66232 cni.go:84] Creating CNI manager for ""
	I0314 00:54:03.215065   66232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:54:03.215109   66232 start.go:340] cluster config:
	{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:54:03.215204   66232 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:54:03.217175   66232 out.go:177] * Starting "old-k8s-version-004791" primary control-plane node in "old-k8s-version-004791" cluster
	I0314 00:54:03.607045   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:03.218613   66232 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:54:03.218655   66232 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0314 00:54:03.218680   66232 cache.go:56] Caching tarball of preloaded images
	I0314 00:54:03.218748   66232 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 00:54:03.218758   66232 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0314 00:54:03.218868   66232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:54:03.219079   66232 start.go:360] acquireMachinesLock for old-k8s-version-004791: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:54:06.679066   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:12.759084   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:15.831164   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:21.911055   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:24.983011   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:31.063042   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:34.135127   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:40.215026   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:43.287108   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:49.367033   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:52.439207   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:58.519055   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:01.591066   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:07.671067   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:10.743137   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:16.823021   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:19.895094   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:25.975060   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:29.047059   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:35.127005   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:38.199075   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:44.279056   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:47.351112   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:53.431074   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:56.503093   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:02.583065   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:05.655062   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:11.735056   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:14.807089   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:20.887027   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:23.959111   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:30.039063   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:33.111114   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:39.191071   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:42.263146   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:48.343110   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:51.415094   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:57.495078   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:00.567113   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:06.647070   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:09.719103   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:15.799052   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:18.871072   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:21.875726   65864 start.go:364] duration metric: took 3m53.150432404s to acquireMachinesLock for "no-preload-585806"
	I0314 00:57:21.875777   65864 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:57:21.875782   65864 fix.go:54] fixHost starting: 
	I0314 00:57:21.876117   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:57:21.876145   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:57:21.891135   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0314 00:57:21.891589   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:57:21.892096   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:57:21.892118   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:57:21.892476   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:57:21.892705   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:21.892868   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:57:21.894635   65864 fix.go:112] recreateIfNeeded on no-preload-585806: state=Stopped err=<nil>
	I0314 00:57:21.894652   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	W0314 00:57:21.894870   65864 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:57:21.896740   65864 out.go:177] * Restarting existing kvm2 VM for "no-preload-585806" ...
	I0314 00:57:21.898041   65864 main.go:141] libmachine: (no-preload-585806) Calling .Start
	I0314 00:57:21.898219   65864 main.go:141] libmachine: (no-preload-585806) Ensuring networks are active...
	I0314 00:57:21.899235   65864 main.go:141] libmachine: (no-preload-585806) Ensuring network default is active
	I0314 00:57:21.899677   65864 main.go:141] libmachine: (no-preload-585806) Ensuring network mk-no-preload-585806 is active
	I0314 00:57:21.900069   65864 main.go:141] libmachine: (no-preload-585806) Getting domain xml...
	I0314 00:57:21.900819   65864 main.go:141] libmachine: (no-preload-585806) Creating domain...
	I0314 00:57:23.105194   65864 main.go:141] libmachine: (no-preload-585806) Waiting to get IP...
	I0314 00:57:23.106090   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.106528   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.106637   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.106516   66729 retry.go:31] will retry after 255.90484ms: waiting for machine to come up
	I0314 00:57:23.364317   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.364804   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.364826   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.364757   66729 retry.go:31] will retry after 364.462281ms: waiting for machine to come up
	I0314 00:57:21.873289   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:57:21.873326   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:57:21.873694   65557 buildroot.go:166] provisioning hostname "embed-certs-164135"
	I0314 00:57:21.873720   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:57:21.873951   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:57:21.875591   65557 machine.go:97] duration metric: took 4m37.40921849s to provisionDockerMachine
	I0314 00:57:21.875631   65557 fix.go:56] duration metric: took 4m37.430459802s for fixHost
	I0314 00:57:21.875640   65557 start.go:83] releasing machines lock for "embed-certs-164135", held for 4m37.43047806s
	W0314 00:57:21.875666   65557 start.go:713] error starting host: provision: host is not running
	W0314 00:57:21.875751   65557 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0314 00:57:21.875760   65557 start.go:728] Will try again in 5 seconds ...
	I0314 00:57:23.731388   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.731971   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.732021   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.731924   66729 retry.go:31] will retry after 426.10288ms: waiting for machine to come up
	I0314 00:57:24.159436   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:24.159930   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:24.159966   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:24.159889   66729 retry.go:31] will retry after 490.499532ms: waiting for machine to come up
	I0314 00:57:24.651751   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:24.652239   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:24.652273   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:24.652218   66729 retry.go:31] will retry after 719.835184ms: waiting for machine to come up
	I0314 00:57:25.374185   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:25.374702   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:25.374728   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:25.374660   66729 retry.go:31] will retry after 944.773779ms: waiting for machine to come up
	I0314 00:57:26.320707   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:26.321049   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:26.321080   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:26.320994   66729 retry.go:31] will retry after 1.088133876s: waiting for machine to come up
	I0314 00:57:27.410642   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:27.411035   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:27.411066   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:27.410989   66729 retry.go:31] will retry after 1.379863279s: waiting for machine to come up
	I0314 00:57:26.877563   65557 start.go:360] acquireMachinesLock for embed-certs-164135: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:57:28.792154   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:28.792533   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:28.792564   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:28.792473   66729 retry.go:31] will retry after 1.814530842s: waiting for machine to come up
	I0314 00:57:30.609244   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:30.609658   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:30.609693   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:30.609597   66729 retry.go:31] will retry after 1.625136332s: waiting for machine to come up
	I0314 00:57:32.236903   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:32.237390   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:32.237409   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:32.237352   66729 retry.go:31] will retry after 1.788940449s: waiting for machine to come up
	I0314 00:57:34.028330   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:34.028825   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:34.028863   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:34.028779   66729 retry.go:31] will retry after 3.427808205s: waiting for machine to come up
	I0314 00:57:37.458317   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:37.458803   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:37.458835   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:37.458738   66729 retry.go:31] will retry after 3.173848854s: waiting for machine to come up
	I0314 00:57:41.915825   66021 start.go:364] duration metric: took 3m51.688049305s to acquireMachinesLock for "default-k8s-diff-port-652215"
	I0314 00:57:41.915886   66021 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:57:41.915895   66021 fix.go:54] fixHost starting: 
	I0314 00:57:41.916343   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:57:41.916378   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:57:41.933352   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37953
	I0314 00:57:41.933827   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:57:41.934418   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:57:41.934441   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:57:41.934820   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:57:41.934993   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:57:41.935162   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:57:41.936554   66021 fix.go:112] recreateIfNeeded on default-k8s-diff-port-652215: state=Stopped err=<nil>
	I0314 00:57:41.936586   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	W0314 00:57:41.936734   66021 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:57:41.939097   66021 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-652215" ...
	I0314 00:57:40.636094   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.636607   65864 main.go:141] libmachine: (no-preload-585806) Found IP for machine: 192.168.39.115
	I0314 00:57:40.636638   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has current primary IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.636645   65864 main.go:141] libmachine: (no-preload-585806) Reserving static IP address...
	I0314 00:57:40.637156   65864 main.go:141] libmachine: (no-preload-585806) Reserved static IP address: 192.168.39.115
	I0314 00:57:40.637189   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "no-preload-585806", mac: "52:54:00:2a:3a:3b", ip: "192.168.39.115"} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.637199   65864 main.go:141] libmachine: (no-preload-585806) Waiting for SSH to be available...
	I0314 00:57:40.637238   65864 main.go:141] libmachine: (no-preload-585806) DBG | skip adding static IP to network mk-no-preload-585806 - found existing host DHCP lease matching {name: "no-preload-585806", mac: "52:54:00:2a:3a:3b", ip: "192.168.39.115"}
	I0314 00:57:40.637254   65864 main.go:141] libmachine: (no-preload-585806) DBG | Getting to WaitForSSH function...
	I0314 00:57:40.639772   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.640240   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.640272   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.640445   65864 main.go:141] libmachine: (no-preload-585806) DBG | Using SSH client type: external
	I0314 00:57:40.640474   65864 main.go:141] libmachine: (no-preload-585806) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa (-rw-------)
	I0314 00:57:40.640508   65864 main.go:141] libmachine: (no-preload-585806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:57:40.640524   65864 main.go:141] libmachine: (no-preload-585806) DBG | About to run SSH command:
	I0314 00:57:40.640533   65864 main.go:141] libmachine: (no-preload-585806) DBG | exit 0
	I0314 00:57:40.770988   65864 main.go:141] libmachine: (no-preload-585806) DBG | SSH cmd err, output: <nil>: 
	I0314 00:57:40.771390   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetConfigRaw
	I0314 00:57:40.772025   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:40.774781   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.775128   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.775161   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.775407   65864 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/config.json ...
	I0314 00:57:40.775636   65864 machine.go:94] provisionDockerMachine start ...
	I0314 00:57:40.775658   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:40.775856   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:40.778051   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.778420   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.778447   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.778517   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:40.778728   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.778917   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.779101   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:40.779283   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:40.779521   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:40.779535   65864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:57:40.891616   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:57:40.891661   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:40.891913   65864 buildroot.go:166] provisioning hostname "no-preload-585806"
	I0314 00:57:40.891947   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:40.892139   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:40.895038   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.895441   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.895473   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.895593   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:40.895778   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.895899   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.896044   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:40.896206   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:40.896418   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:40.896438   65864 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-585806 && echo "no-preload-585806" | sudo tee /etc/hostname
	I0314 00:57:41.027921   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-585806
	
	I0314 00:57:41.027946   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.030406   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.030826   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.030856   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.031091   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.031314   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.031458   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.031656   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.031820   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.032043   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.032064   65864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-585806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-585806/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-585806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:57:41.152387   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:57:41.152420   65864 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:57:41.152443   65864 buildroot.go:174] setting up certificates
	I0314 00:57:41.152451   65864 provision.go:84] configureAuth start
	I0314 00:57:41.152459   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:41.152713   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:41.155431   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.155790   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.155816   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.155963   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.158272   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.158691   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.158720   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.158912   65864 provision.go:143] copyHostCerts
	I0314 00:57:41.158991   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:57:41.159005   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:57:41.159094   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:57:41.159204   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:57:41.159213   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:57:41.159242   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:57:41.159299   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:57:41.159306   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:57:41.159326   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:57:41.159380   65864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.no-preload-585806 san=[127.0.0.1 192.168.39.115 localhost minikube no-preload-585806]
	I0314 00:57:41.204543   65864 provision.go:177] copyRemoteCerts
	I0314 00:57:41.204599   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:57:41.204624   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.207169   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.207479   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.207505   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.207717   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.207870   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.208042   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.208200   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.294111   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:57:41.319125   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 00:57:41.344061   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:57:41.369393   65864 provision.go:87] duration metric: took 216.929827ms to configureAuth
	I0314 00:57:41.369428   65864 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:57:41.369621   65864 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:57:41.369690   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.372440   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.372782   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.372809   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.373062   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.373298   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.373543   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.373716   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.373895   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.374097   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.374122   65864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:57:41.665162   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:57:41.665200   65864 machine.go:97] duration metric: took 889.549183ms to provisionDockerMachine
	I0314 00:57:41.665214   65864 start.go:293] postStartSetup for "no-preload-585806" (driver="kvm2")
	I0314 00:57:41.665227   65864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:57:41.665243   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.665626   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:57:41.665662   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.668351   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.668798   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.668827   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.669012   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.669412   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.669635   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.669794   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.758910   65864 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:57:41.763539   65864 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:57:41.763571   65864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:57:41.763645   65864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:57:41.763719   65864 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:57:41.763809   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:57:41.774372   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:57:41.799961   65864 start.go:296] duration metric: took 134.732457ms for postStartSetup
	I0314 00:57:41.800006   65864 fix.go:56] duration metric: took 19.924222364s for fixHost
	I0314 00:57:41.800030   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.802714   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.803178   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.803201   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.803357   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.803557   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.803730   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.803888   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.804064   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.804220   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.804231   65864 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:57:41.915615   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377861.868053197
	
	I0314 00:57:41.915646   65864 fix.go:216] guest clock: 1710377861.868053197
	I0314 00:57:41.915654   65864 fix.go:229] Guest: 2024-03-14 00:57:41.868053197 +0000 UTC Remote: 2024-03-14 00:57:41.800010702 +0000 UTC m=+253.225618100 (delta=68.042495ms)
	I0314 00:57:41.915695   65864 fix.go:200] guest clock delta is within tolerance: 68.042495ms
	I0314 00:57:41.915704   65864 start.go:83] releasing machines lock for "no-preload-585806", held for 20.039948178s
	I0314 00:57:41.915733   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.916097   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:41.918713   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.919145   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.919175   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.919352   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.919878   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.920065   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.920140   65864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:57:41.920200   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.920257   65864 ssh_runner.go:195] Run: cat /version.json
	I0314 00:57:41.920279   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.922799   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923104   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923176   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.923200   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923333   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.923527   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.923572   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.923602   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923710   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.923788   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.923884   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.923950   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.924091   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.924265   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:42.004651   65864 ssh_runner.go:195] Run: systemctl --version
	I0314 00:57:42.045673   65864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:57:42.198196   65864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:57:42.204887   65864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:57:42.204968   65864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:57:42.223088   65864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:57:42.223116   65864 start.go:494] detecting cgroup driver to use...
	I0314 00:57:42.223181   65864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:57:42.240213   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:57:42.260222   65864 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:57:42.260282   65864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:57:42.279489   65864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:57:42.297898   65864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:57:42.436010   65864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:57:42.591582   65864 docker.go:233] disabling docker service ...
	I0314 00:57:42.591653   65864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:57:42.609192   65864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:57:42.629505   65864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:57:42.788667   65864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:57:42.920745   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:57:42.947679   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:57:42.970420   65864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:57:42.970496   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:42.984792   65864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:57:42.984851   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:42.998350   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:43.011001   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:43.023341   65864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:57:43.036165   65864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:57:43.047342   65864 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:57:43.047401   65864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:57:43.063390   65864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:57:43.075512   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:57:43.214939   65864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:57:43.370092   65864 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:57:43.370154   65864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:57:43.375110   65864 start.go:562] Will wait 60s for crictl version
	I0314 00:57:43.375156   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.379051   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:57:43.421498   65864 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:57:43.421587   65864 ssh_runner.go:195] Run: crio --version
	I0314 00:57:43.451281   65864 ssh_runner.go:195] Run: crio --version
	I0314 00:57:43.486171   65864 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 00:57:43.487776   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:43.490910   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:43.491299   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:43.491328   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:43.491513   65864 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 00:57:43.495972   65864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:57:43.510066   65864 kubeadm.go:877] updating cluster {Name:no-preload-585806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:57:43.510197   65864 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 00:57:43.510235   65864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:57:43.550172   65864 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 00:57:43.550198   65864 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:57:43.550251   65864 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:43.550290   65864 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.550308   65864 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.550348   65864 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.550373   65864 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.550409   65864 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.550329   65864 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0314 00:57:43.550287   65864 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.551857   65864 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.551883   65864 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.551922   65864 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.551926   65864 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.551915   65864 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.551860   65864 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:43.552047   65864 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0314 00:57:43.552087   65864 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:41.940702   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Start
	I0314 00:57:41.940872   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring networks are active...
	I0314 00:57:41.941571   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring network default is active
	I0314 00:57:41.941942   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring network mk-default-k8s-diff-port-652215 is active
	I0314 00:57:41.942369   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Getting domain xml...
	I0314 00:57:41.943060   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Creating domain...
	I0314 00:57:43.253573   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting to get IP...
	I0314 00:57:43.254399   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.254819   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.254871   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.254798   66848 retry.go:31] will retry after 250.726741ms: waiting for machine to come up
	I0314 00:57:43.507438   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.507947   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.507974   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.507889   66848 retry.go:31] will retry after 261.304364ms: waiting for machine to come up
	I0314 00:57:43.770392   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.770932   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.770992   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.770922   66848 retry.go:31] will retry after 399.951584ms: waiting for machine to come up
	I0314 00:57:44.172796   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.173301   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.173330   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:44.173250   66848 retry.go:31] will retry after 446.71472ms: waiting for machine to come up
	I0314 00:57:44.621959   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.622493   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.622524   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:44.622435   66848 retry.go:31] will retry after 594.760117ms: waiting for machine to come up
	I0314 00:57:43.767614   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.767919   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.781946   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.792745   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.820426   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.821936   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.874149   65864 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0314 00:57:43.874193   65864 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.874207   65864 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0314 00:57:43.874239   65864 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.874263   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.874281   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.909916   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0314 00:57:43.929648   65864 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0314 00:57:43.929701   65864 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.929756   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.929769   65864 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0314 00:57:43.929810   65864 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.929866   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958025   65864 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0314 00:57:43.958074   65864 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.958108   65864 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0314 00:57:43.958151   65864 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.958171   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.958188   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958124   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958192   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:44.099675   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:44.099750   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:44.099805   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:44.099859   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.099898   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:44.099943   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.099999   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0314 00:57:44.100067   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:44.185667   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:44.185697   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0314 00:57:44.185784   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0314 00:57:44.185822   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0314 00:57:44.185833   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.185860   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.185874   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:44.185787   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:44.185787   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:44.191806   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:44.191853   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0314 00:57:44.191922   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:44.205188   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0314 00:57:44.428096   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:47.084005   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.898127832s)
	I0314 00:57:47.084049   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0314 00:57:47.084073   65864 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:47.084084   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.898188272s)
	I0314 00:57:47.084114   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:47.084123   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0314 00:57:47.084163   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.898224944s)
	I0314 00:57:47.084176   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0314 00:57:47.084213   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.892265677s)
	I0314 00:57:47.084231   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0314 00:57:47.084261   65864 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.656144328s)
	I0314 00:57:47.084290   65864 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0314 00:57:47.084313   65864 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:47.084344   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:45.219284   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:45.219835   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:45.219865   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:45.219763   66848 retry.go:31] will retry after 838.074484ms: waiting for machine to come up
	I0314 00:57:46.059759   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:46.060182   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:46.060212   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:46.060124   66848 retry.go:31] will retry after 1.038046627s: waiting for machine to come up
	I0314 00:57:47.100208   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:47.100623   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:47.100651   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:47.100574   66848 retry.go:31] will retry after 1.029629423s: waiting for machine to come up
	I0314 00:57:48.131899   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:48.132332   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:48.132360   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:48.132293   66848 retry.go:31] will retry after 1.38894741s: waiting for machine to come up
	I0314 00:57:49.522727   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:49.523219   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:49.523250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:49.523177   66848 retry.go:31] will retry after 1.498715394s: waiting for machine to come up
	I0314 00:57:51.187413   65864 ssh_runner.go:235] Completed: which crictl: (4.103045994s)
	I0314 00:57:51.187456   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.103319804s)
	I0314 00:57:51.187508   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0314 00:57:51.187527   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:51.187571   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:51.187669   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:51.236123   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 00:57:51.236241   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:53.072155   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.88445651s)
	I0314 00:57:53.072191   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0314 00:57:53.072203   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.835936702s)
	I0314 00:57:53.072239   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0314 00:57:53.072216   65864 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:53.072298   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:51.024135   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:51.024551   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:51.024591   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:51.024485   66848 retry.go:31] will retry after 1.906242033s: waiting for machine to come up
	I0314 00:57:52.931992   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:52.932501   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:52.932532   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:52.932435   66848 retry.go:31] will retry after 2.502905013s: waiting for machine to come up
	I0314 00:57:55.041813   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969486159s)
	I0314 00:57:55.041846   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0314 00:57:55.041873   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:55.041921   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:56.401046   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.359096555s)
	I0314 00:57:56.401083   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0314 00:57:56.401125   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:56.401206   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:55.438250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:55.438696   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:55.438728   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:55.438645   66848 retry.go:31] will retry after 4.267197677s: waiting for machine to come up
	I0314 00:57:59.709345   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.709884   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Found IP for machine: 192.168.61.7
	I0314 00:57:59.709901   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Reserving static IP address...
	I0314 00:57:59.709912   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has current primary IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.710329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-652215", mac: "52:54:00:58:e5:b0", ip: "192.168.61.7"} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.710365   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | skip adding static IP to network mk-default-k8s-diff-port-652215 - found existing host DHCP lease matching {name: "default-k8s-diff-port-652215", mac: "52:54:00:58:e5:b0", ip: "192.168.61.7"}
	I0314 00:57:59.710387   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Reserved static IP address: 192.168.61.7
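The retries above are libmachine polling libvirt until the guest picks up a DHCP lease on the profile's network. A minimal way to inspect the same lease by hand, assuming host access to the libvirt client tools and the network name shown in the log:

    # List DHCP leases on the libvirt network used by this profile
    # (network name taken from the log lines above).
    virsh net-dhcp-leases mk-default-k8s-diff-port-652215
    # The output should include MAC 52:54:00:58:e5:b0 mapped to 192.168.61.7.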
	I0314 00:57:59.710404   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for SSH to be available...
	I0314 00:57:59.710420   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Getting to WaitForSSH function...
	I0314 00:57:59.712445   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.712764   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.712794   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.712867   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Using SSH client type: external
	I0314 00:57:59.712903   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa (-rw-------)
	I0314 00:57:59.712926   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:57:59.712940   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | About to run SSH command:
	I0314 00:57:59.712946   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | exit 0
	I0314 00:57:59.831120   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | SSH cmd err, output: <nil>: 
	I0314 00:57:59.831427   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetConfigRaw
	I0314 00:57:59.832230   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:57:59.834631   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.835052   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.835085   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.835264   66021 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/config.json ...
	I0314 00:57:59.835458   66021 machine.go:94] provisionDockerMachine start ...
	I0314 00:57:59.835478   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:57:59.835700   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:57:59.838267   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.838654   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.838681   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.838814   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:57:59.838985   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.839158   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.839318   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:57:59.839533   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:59.839750   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:57:59.839764   66021 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:57:59.943463   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:57:59.943488   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:57:59.943743   66021 buildroot.go:166] provisioning hostname "default-k8s-diff-port-652215"
	I0314 00:57:59.943765   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:57:59.943905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:57:59.946244   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.946561   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.946592   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.946858   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:57:59.947069   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.947218   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.947329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:57:59.947522   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:59.947682   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:57:59.947695   66021 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-652215 && echo "default-k8s-diff-port-652215" | sudo tee /etc/hostname
	I0314 00:58:00.063433   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-652215
	
	I0314 00:58:00.063467   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.066382   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.066832   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.066872   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.067051   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.067272   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.067505   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.067706   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.067914   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:00.068139   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:00.068167   66021 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-652215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-652215/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-652215' | sudo tee -a /etc/hosts; 
				fi
			fi
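The SSH script above rewrites the hostname and patches the 127.0.1.1 entry in /etc/hosts. A quick, illustrative check that both changes took effect (IP, user, and key path taken from the log; not a step the test itself runs):

    # Confirm the hostname and the /etc/hosts entry on the guest.
    ssh -i ~/.minikube/machines/default-k8s-diff-port-652215/id_rsa docker@192.168.61.7 \
      'hostname; grep default-k8s-diff-port-652215 /etc/hosts'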
	I0314 00:58:01.167666   66232 start.go:364] duration metric: took 3m57.948538504s to acquireMachinesLock for "old-k8s-version-004791"
	I0314 00:58:01.167732   66232 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:58:01.167743   66232 fix.go:54] fixHost starting: 
	I0314 00:58:01.168159   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:01.168192   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:01.184977   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39193
	I0314 00:58:01.185352   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:01.185781   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:58:01.185799   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:01.186133   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:01.186318   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:01.186463   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetState
	I0314 00:58:01.187778   66232 fix.go:112] recreateIfNeeded on old-k8s-version-004791: state=Stopped err=<nil>
	I0314 00:58:01.187814   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	W0314 00:58:01.187966   66232 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:58:01.190508   66232 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-004791" ...
	I0314 00:58:00.185178   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:00.185209   66021 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:00.185258   66021 buildroot.go:174] setting up certificates
	I0314 00:58:00.185270   66021 provision.go:84] configureAuth start
	I0314 00:58:00.185286   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:58:00.185558   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:00.188566   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.188946   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.188977   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.189147   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.191605   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.191954   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.191981   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.192111   66021 provision.go:143] copyHostCerts
	I0314 00:58:00.192179   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:00.192193   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:00.192295   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:00.192409   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:00.192420   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:00.192449   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:00.192531   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:00.192541   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:00.192571   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:00.192650   66021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-652215 san=[127.0.0.1 192.168.61.7 default-k8s-diff-port-652215 localhost minikube]
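configureAuth above generates a server certificate signed by the minikube CA with the SANs listed in the log. A rough openssl equivalent, assuming the CA key pair from the .minikube/certs directory; this is only a sketch, not minikube's actual code path (the real cert is generated in Go):

    # Create a key and CSR, then sign it with the local CA, adding the SANs
    # shown in the log entry above (file names are illustrative).
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.default-k8s-diff-port-652215/CN=minikube"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.61.7,DNS:default-k8s-diff-port-652215,DNS:localhost,DNS:minikube')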
	I0314 00:58:00.441714   66021 provision.go:177] copyRemoteCerts
	I0314 00:58:00.441760   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:00.441783   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.444329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.444711   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.444740   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.444905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.445096   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.445257   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.445369   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:00.529677   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:00.560670   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0314 00:58:00.589572   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:58:00.620349   66021 provision.go:87] duration metric: took 435.063551ms to configureAuth
	I0314 00:58:00.620380   66021 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:00.620576   66021 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:00.620670   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.623250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.623633   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.623663   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.623825   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.624017   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.624205   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.624346   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.624474   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:00.624650   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:00.624664   66021 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:00.940388   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:00.940416   66021 machine.go:97] duration metric: took 1.104945308s to provisionDockerMachine
	I0314 00:58:00.940430   66021 start.go:293] postStartSetup for "default-k8s-diff-port-652215" (driver="kvm2")
	I0314 00:58:00.940443   66021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:00.940513   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:00.940829   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:00.940861   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.943461   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.943854   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.943881   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.944035   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.944233   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.944392   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.944514   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.028775   66021 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:01.034219   66021 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:01.034246   66021 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:01.034319   66021 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:01.034417   66021 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:01.034534   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:01.043871   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:01.068236   66021 start.go:296] duration metric: took 127.791208ms for postStartSetup
	I0314 00:58:01.068281   66021 fix.go:56] duration metric: took 19.152386474s for fixHost
	I0314 00:58:01.068320   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.071153   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.071474   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.071519   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.071664   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.071873   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.072037   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.072184   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.072339   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:01.072546   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:01.072560   66021 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 00:58:01.167500   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377881.146926820
	
	I0314 00:58:01.167531   66021 fix.go:216] guest clock: 1710377881.146926820
	I0314 00:58:01.167543   66021 fix.go:229] Guest: 2024-03-14 00:58:01.14692682 +0000 UTC Remote: 2024-03-14 00:58:01.068285678 +0000 UTC m=+250.989822406 (delta=78.641142ms)
	I0314 00:58:01.167569   66021 fix.go:200] guest clock delta is within tolerance: 78.641142ms
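The delta above compares the guest clock (read over SSH with date +%s.%N) against the controller's own wall clock and accepts it when the skew is small. A minimal sketch of the same check, assuming SSH access to the guest and bc on the host:

    # Compare guest time to local time; a delta well under a second is fine.
    guest=$(ssh docker@192.168.61.7 'date +%s.%N')
    local_now=$(date +%s.%N)
    echo "clock delta: $(echo "$guest - $local_now" | bc) s"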
	I0314 00:58:01.167576   66021 start.go:83] releasing machines lock for "default-k8s-diff-port-652215", held for 19.251715411s
	I0314 00:58:01.167603   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.167900   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:01.170608   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.171001   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.171041   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.171190   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171674   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171856   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171937   66021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:01.171985   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.172100   66021 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:01.172128   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.174787   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.174963   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175180   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.175209   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175343   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.175398   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175477   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.175553   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.175677   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.175741   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.175803   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.175880   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.175939   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.176003   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.251768   66021 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:01.289374   66021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:01.438966   66021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:01.445524   66021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:01.445595   66021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:01.463672   66021 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:01.463699   66021 start.go:494] detecting cgroup driver to use...
	I0314 00:58:01.463778   66021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:01.485254   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:01.503492   66021 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:01.503552   66021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:01.522423   66021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:01.537421   66021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:01.664303   66021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:01.819916   66021 docker.go:233] disabling docker service ...
	I0314 00:58:01.819980   66021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:01.838697   66021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:01.853242   66021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:02.003570   66021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:02.146836   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:02.162421   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:02.191202   66021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:58:02.191272   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.206856   66021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:02.206923   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.219794   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.233272   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.245213   66021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:02.259118   66021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:02.273991   66021 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:02.274056   66021 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:02.289319   66021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:02.300063   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:02.416447   66021 ssh_runner.go:195] Run: sudo systemctl restart crio
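The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup), loads br_netfilter, enables IP forwarding, and restarts CRI-O. The same steps consolidated into one illustrative script, with values copied from the log lines rather than from minikube's code:

    # Point CRI-O at the pause image and the cgroupfs driver, then restart it.
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio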
	I0314 00:58:02.566738   66021 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:02.566859   66021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:02.572193   66021 start.go:562] Will wait 60s for crictl version
	I0314 00:58:02.572234   66021 ssh_runner.go:195] Run: which crictl
	I0314 00:58:02.576144   66021 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:02.615025   66021 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:02.615124   66021 ssh_runner.go:195] Run: crio --version
	I0314 00:58:02.643201   66021 ssh_runner.go:195] Run: crio --version
	I0314 00:58:02.673207   66021 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
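After the restart, the test waits for the CRI socket and then queries the runtime. The same probe can be run by hand (illustrative; socket path taken from the log):

    # Ask the CRI-O runtime for its version over the CRI socket.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    crio --version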
	I0314 00:58:01.192096   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .Start
	I0314 00:58:01.192279   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring networks are active...
	I0314 00:58:01.192923   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network default is active
	I0314 00:58:01.193276   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network mk-old-k8s-version-004791 is active
	I0314 00:58:01.193771   66232 main.go:141] libmachine: (old-k8s-version-004791) Getting domain xml...
	I0314 00:58:01.194453   66232 main.go:141] libmachine: (old-k8s-version-004791) Creating domain...
	I0314 00:58:02.495098   66232 main.go:141] libmachine: (old-k8s-version-004791) Waiting to get IP...
	I0314 00:58:02.496096   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:02.496509   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:02.496599   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:02.496504   66971 retry.go:31] will retry after 226.458873ms: waiting for machine to come up
	I0314 00:58:02.724812   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:02.725355   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:02.725383   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:02.725305   66971 retry.go:31] will retry after 274.59062ms: waiting for machine to come up
	I0314 00:58:03.001727   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.002335   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.002486   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.002429   66971 retry.go:31] will retry after 362.865307ms: waiting for machine to come up
	I0314 00:57:58.881850   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.480612113s)
	I0314 00:57:58.881884   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0314 00:57:58.881919   65864 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:58.881990   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:59.732349   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 00:57:59.732390   65864 cache_images.go:123] Successfully loaded all cached images
	I0314 00:57:59.732395   65864 cache_images.go:92] duration metric: took 16.182181374s to LoadCachedImages
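LoadCachedImages above copies each cached image tarball to the guest (skipping ones already present) and loads it into CRI-O's image store via podman. A hand-rolled sketch of one iteration, assuming the SSH user, key, and paths shown in the log:

    # Transfer one cached image tarball and load it into CRI-O storage (sketch).
    KEY=~/.minikube/machines/no-preload-585806/id_rsa
    scp -i "$KEY" ~/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 \
      docker@192.168.39.115:/tmp/etcd_3.5.10-0
    ssh -i "$KEY" docker@192.168.39.115 \
      'sudo podman load -i /tmp/etcd_3.5.10-0 && sudo crictl images | grep etcd'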
	I0314 00:57:59.732406   65864 kubeadm.go:928] updating node { 192.168.39.115 8443 v1.29.0-rc.2 crio true true} ...
	I0314 00:57:59.732566   65864 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-585806 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
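The kubelet drop-in above deliberately writes an empty ExecStart= before the real command line: for a normal (non-oneshot) systemd service only one ExecStart is allowed, so clearing the list first is what lets the drop-in replace the packaged command instead of conflicting with it. A quick, illustrative way to confirm the merged unit on the guest:

    # Inspect the kubelet unit after the 10-kubeadm.conf drop-in is written.
    sudo systemctl daemon-reload
    systemctl cat kubelet.service          # base unit plus drop-ins
    systemctl show kubelet -p ExecStart    # the effective command line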
	I0314 00:57:59.732632   65864 ssh_runner.go:195] Run: crio config
	I0314 00:57:59.780946   65864 cni.go:84] Creating CNI manager for ""
	I0314 00:57:59.780969   65864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:57:59.780980   65864 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:57:59.780999   65864 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-585806 NodeName:no-preload-585806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:57:59.781184   65864 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-585806"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
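The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new before the cluster is (re)started. One way to sanity-check such a config without touching the node, assuming the kubeadm binary path from the log; this is a sketch, not a step the test itself runs:

    # Validate the generated config with a dry run (no changes applied to the host).
    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run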
	
	I0314 00:57:59.781255   65864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 00:57:59.791989   65864 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:57:59.792059   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:57:59.801720   65864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0314 00:57:59.819248   65864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 00:57:59.837405   65864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 00:57:59.855909   65864 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0314 00:57:59.861139   65864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:57:59.877573   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:00.004672   65864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:00.025676   65864 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806 for IP: 192.168.39.115
	I0314 00:58:00.025696   65864 certs.go:194] generating shared ca certs ...
	I0314 00:58:00.025711   65864 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:00.025861   65864 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:00.025912   65864 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:00.025925   65864 certs.go:256] generating profile certs ...
	I0314 00:58:00.026023   65864 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/client.key
	I0314 00:58:00.026093   65864 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.key.e22b08b3
	I0314 00:58:00.026150   65864 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.key
	I0314 00:58:00.026304   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:00.026342   65864 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:00.026355   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:00.026393   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:00.026424   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:00.026461   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:00.026510   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:00.027206   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:00.087876   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:00.130974   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:00.159419   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:00.202659   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 00:58:00.248014   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 00:58:00.273362   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:00.297326   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:58:00.321565   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:00.346012   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:00.370094   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:00.393592   65864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:00.411060   65864 ssh_runner.go:195] Run: openssl version
	I0314 00:58:00.417031   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:00.428430   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.433251   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.433303   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.439142   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:00.451840   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:00.466706   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.472024   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.472101   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.479004   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:00.490877   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:00.503120   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.507926   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.507973   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.513957   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
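The ln -fs commands above create the hash-named links (3ec20f2e.0, b5213941.0, 51391683.0) that OpenSSL's CApath lookup expects: each trusted certificate is found through a symlink named after its subject hash. Reproducing one such link by hand (illustrative; paths differ slightly from the log):

    # Derive the subject hash and create the CApath-style symlink for the CA.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    ls -l "/etc/ssl/certs/${h}.0"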
	I0314 00:58:00.526055   65864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:00.531442   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:00.538049   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:00.544709   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:00.551218   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:00.557610   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:00.564187   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
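Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether the existing control-plane certificates can be reused. The same checks rolled into one illustrative loop (file list taken from the log):

    # Report which control-plane certs are still valid for at least 24 hours.
    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      if sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400; then
        echo "${c}: ok"
      else
        echo "${c}: expires within 24h (or unreadable)"
      fi
    done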
	I0314 00:58:00.571582   65864 kubeadm.go:391] StartCluster: {Name:no-preload-585806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:00.571725   65864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:00.571793   65864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:00.625273   65864 cri.go:89] found id: ""
	I0314 00:58:00.625330   65864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:00.636554   65864 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:00.636582   65864 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:00.636588   65864 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:00.636630   65864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:00.648360   65864 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:00.649289   65864 kubeconfig.go:125] found "no-preload-585806" server: "https://192.168.39.115:8443"
	I0314 00:58:00.652107   65864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:00.664337   65864 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.115
	I0314 00:58:00.664378   65864 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:00.664390   65864 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:00.664436   65864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:00.702043   65864 cri.go:89] found id: ""
	I0314 00:58:00.702119   65864 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:00.721052   65864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:00.732931   65864 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:00.732961   65864 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:00.733015   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:00.743282   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:00.743363   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:00.753893   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:00.764545   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:00.764603   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:00.779121   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:00.795628   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:00.795690   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:00.807835   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:00.820920   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:00.821000   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
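
The block above is the stale-config sweep: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf it greps for the expected control-plane endpoint and removes the file when the endpoint is not found (here the files do not exist at all, so each grep exits with status 2 and the rm is a no-op). A compact sketch of that loop, run locally rather than over SSH as minikube does:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // cleanupStaleConfigs removes kubeconfigs that do not reference the
    // expected control-plane endpoint. Sketch only.
    func cleanupStaleConfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is absent or the file is missing.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }

    func main() {
    	cleanupStaleConfigs("https://control-plane.minikube.internal:8443")
    }
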
	I0314 00:58:00.834341   65864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:00.844677   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:00.971502   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:01.810329   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:02.063422   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:02.144025   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
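
Rather than a full `kubeadm init`, the restart path replays the individual init phases shown above — certs, kubeconfig, kubelet-start, control-plane and etcd — each against the freshly copied /var/tmp/minikube/kubeadm.yaml. A sketch of driving that sequence, assuming kubeadm is on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The same phase order as in the log above.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
    			fmt.Printf("kubeadm %v failed: %v\n%s\n", p, err, out)
    			return
    		}
    	}
    }
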
	I0314 00:58:02.284020   65864 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:02.284117   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:02.784938   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:03.285046   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:03.349582   65864 api_server.go:72] duration metric: took 1.065560764s to wait for apiserver process to appear ...
	I0314 00:58:03.349613   65864 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:03.349634   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:03.350222   65864 api_server.go:269] stopped: https://192.168.39.115:8443/healthz: Get "https://192.168.39.115:8443/healthz": dial tcp 192.168.39.115:8443: connect: connection refused
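
After the init phases, the code first waits for a kube-apiserver process (the pgrep loop) and then polls /healthz until it answers; until the endpoint is listening the probe fails with "connection refused", as above. A minimal sketch of such a readiness poll, assuming a cluster-internal serving cert (hence the skipped TLS verification) and the hypothetical name waitHealthz:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.39.115:8443/healthz", time.Minute))
    }
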
	I0314 00:58:02.674905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:02.677914   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:02.678319   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:02.678358   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:02.678506   66021 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:02.682714   66021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
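
The one-liner above is how the hosts entry is kept idempotent: any existing `host.minikube.internal` line is filtered out, the current mapping is appended, and the temp file is copied back over /etc/hosts with sudo (the same pattern is used later for control-plane.minikube.internal). A sketch of the equivalent logic done in-process, with a hypothetical upsertHostsEntry helper pointed at a scratch file:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHostsEntry rewrites an /etc/hosts-style file so that exactly one
    // line maps hostname to ip. Hypothetical helper for illustration.
    func upsertHostsEntry(path, ip, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+hostname) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	fmt.Println(upsertHostsEntry("/tmp/hosts.test", "192.168.61.1", "host.minikube.internal"))
    }
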
	I0314 00:58:02.696263   66021 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-652215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:02.696407   66021 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:58:02.696474   66021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:02.736997   66021 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 00:58:02.737060   66021 ssh_runner.go:195] Run: which lz4
	I0314 00:58:02.741014   66021 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 00:58:02.745225   66021 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1

	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:02.745255   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 00:58:04.577503   66021 crio.go:444] duration metric: took 1.836515386s to copy over tarball
	I0314 00:58:04.577580   66021 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
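
Because no preloaded images were found in the CRI store, the ~458 MB preload tarball is copied to the node and unpacked over /var with `tar --xattrs --xattrs-include security.capability -I lz4`, preserving the extended attributes (file capabilities) the bundled binaries rely on. A sketch of invoking the same extraction, assuming GNU tar and lz4 are present on the target:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Extract the preload tarball over /var, keeping security.capability
    	// xattrs so binaries retain their file capabilities.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s\n", err, out)
    	}
    }
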
	I0314 00:58:03.367211   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.367946   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.367985   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.367818   66971 retry.go:31] will retry after 545.955079ms: waiting for machine to come up
	I0314 00:58:03.915415   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.915920   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.915946   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.915836   66971 retry.go:31] will retry after 509.217519ms: waiting for machine to come up
	I0314 00:58:04.426378   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:04.426707   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:04.426730   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:04.426682   66971 retry.go:31] will retry after 834.85927ms: waiting for machine to come up
	I0314 00:58:05.263751   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:05.264214   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:05.264244   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:05.264155   66971 retry.go:31] will retry after 986.483361ms: waiting for machine to come up
	I0314 00:58:06.251927   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:06.252550   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:06.252573   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:06.252475   66971 retry.go:31] will retry after 1.151541473s: waiting for machine to come up
	I0314 00:58:07.405797   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:07.406395   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:07.406425   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:07.406349   66971 retry.go:31] will retry after 1.406754601s: waiting for machine to come up
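
While the other profiles restart, the old-k8s-version VM is still booting; the kvm2 driver polls libvirt's DHCP leases for the domain's MAC address and, as the retry.go lines above show, retries with a growing, jittered delay until an IP appears. A generic sketch of that retry shape, with lookupIP as a hypothetical stand-in for the lease query:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var attempts int

    // lookupIP stands in for querying libvirt's DHCP leases; it is hypothetical
    // and simply fails a few times before returning a placeholder address.
    func lookupIP() (string, error) {
    	attempts++
    	if attempts < 4 {
    		return "", errors.New("no lease yet")
    	}
    	return "192.168.72.10", nil // placeholder
    }

    func main() {
    	delay := 300 * time.Millisecond
    	for {
    		ip, err := lookupIP()
    		if err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		// Grow the delay and add jitter between attempts.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    }
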
	I0314 00:58:03.850705   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.738726   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:06.738753   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:06.738788   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.754844   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:06.754883   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:06.850175   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.859445   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:06.859483   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.350592   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:07.367299   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:07.367337   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.850476   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:08.566122   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:08.566165   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:08.566182   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:08.571741   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:08.571777   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.355046   66021 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.77743394s)
	I0314 00:58:07.355081   66021 crio.go:451] duration metric: took 2.77754644s to extract the tarball
	I0314 00:58:07.355093   66021 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:07.401032   66021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:07.451493   66021 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:58:07.451515   66021 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:58:07.451523   66021 kubeadm.go:928] updating node { 192.168.61.7 8444 v1.28.4 crio true true} ...
	I0314 00:58:07.451679   66021 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-652215 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
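
The unit snippet above is the kubelet drop-in minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf: the empty `ExecStart=` line clears the value inherited from the base kubelet.service unit so the following line fully replaces it, pinning the kubelet binary version, node IP, hostname override and kubeconfig paths. A sketch of rendering such a drop-in, with the values treated as placeholders:

    package main

    import "fmt"

    // kubeletDropIn renders a systemd override that replaces ExecStart.
    // The empty "ExecStart=" is what resets the inherited value.
    func kubeletDropIn(version, nodeIP, hostname string) string {
    	return fmt.Sprintf(`[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

    [Install]
    `, version, hostname, nodeIP)
    }

    func main() {
    	fmt.Print(kubeletDropIn("v1.28.4", "192.168.61.7", "default-k8s-diff-port-652215"))
    }
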
	I0314 00:58:07.451756   66021 ssh_runner.go:195] Run: crio config
	I0314 00:58:07.500159   66021 cni.go:84] Creating CNI manager for ""
	I0314 00:58:07.500182   66021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:07.500192   66021 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:07.500211   66021 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.7 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-652215 NodeName:default-k8s-diff-port-652215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:58:07.500349   66021 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-652215"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
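
	The generated /var/tmp/minikube/kubeadm.yaml above is one file holding four YAML documents: InitConfiguration (bind address/port and node registration), ClusterConfiguration (component extra args, control-plane endpoint, etcd), KubeletConfiguration and KubeProxyConfiguration. A rough sketch of listing the kinds in such a multi-document file, using only the standard library (a line-level scan rather than a real YAML decoder):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // listKinds scans a multi-document YAML file and reports the `kind:`
    // of each document. Illustration only; real code would use a YAML decoder.
    func listKinds(path string) ([]string, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return nil, err
    	}
    	var kinds []string
    	for _, doc := range strings.Split(string(data), "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			trimmed := strings.TrimSpace(line)
    			if strings.HasPrefix(trimmed, "kind:") {
    				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
    				break
    			}
    		}
    	}
    	return kinds, nil
    }

    func main() {
    	kinds, err := listKinds("/var/tmp/minikube/kubeadm.yaml")
    	fmt.Println(kinds, err)
    }
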
	
	I0314 00:58:07.500398   66021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:58:07.515207   66021 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:07.515281   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:07.530918   66021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0314 00:58:07.558457   66021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:07.582126   66021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 00:58:07.678701   66021 ssh_runner.go:195] Run: grep 192.168.61.7	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:07.684200   66021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:07.701599   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:07.825784   66021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:07.848241   66021 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215 for IP: 192.168.61.7
	I0314 00:58:07.848265   66021 certs.go:194] generating shared ca certs ...
	I0314 00:58:07.848286   66021 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:07.848457   66021 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:07.848515   66021 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:07.848529   66021 certs.go:256] generating profile certs ...
	I0314 00:58:07.848644   66021 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/client.key
	I0314 00:58:07.935830   66021 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.key.b1ed833a
	I0314 00:58:07.935933   66021 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.key
	I0314 00:58:07.936092   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:07.936147   66021 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:07.936161   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:07.936191   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:07.936222   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:07.936255   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:07.936326   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:07.937040   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:07.981116   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:08.010341   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:08.036689   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:08.064909   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 00:58:08.092883   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 00:58:08.119465   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:08.146029   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 00:58:08.171735   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:08.198370   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:08.225423   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:08.253303   66021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:08.272262   66021 ssh_runner.go:195] Run: openssl version
	I0314 00:58:08.278047   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:08.289661   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.294307   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.294365   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.300267   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:08.311382   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:08.322886   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.328522   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.328588   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.335598   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:08.347048   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:08.358811   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.365065   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.365113   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.372929   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:08.384586   66021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:08.389382   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:08.395577   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:08.401901   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:08.409134   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:08.415666   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:08.422160   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 00:58:08.428553   66021 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-652215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:08.428681   66021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:08.428757   66021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:08.471162   66021 cri.go:89] found id: ""
	I0314 00:58:08.471246   66021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:08.482236   66021 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:08.482258   66021 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:08.482266   66021 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:08.482318   66021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:08.492599   66021 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:08.493612   66021 kubeconfig.go:125] found "default-k8s-diff-port-652215" server: "https://192.168.61.7:8444"
	I0314 00:58:08.495896   66021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:08.509437   66021 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.7
	I0314 00:58:08.509469   66021 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:08.509498   66021 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:08.509552   66021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:08.549257   66021 cri.go:89] found id: ""
	I0314 00:58:08.549319   66021 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:08.570357   66021 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:08.580942   66021 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:08.580961   66021 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:08.581002   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 00:58:08.590668   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:08.590750   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:08.600638   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 00:58:08.610219   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:08.610289   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:08.620324   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 00:58:08.629979   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:08.630037   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:08.640264   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 00:58:08.650070   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:08.650126   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:08.661293   66021 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:08.671779   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:08.808194   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:09.724860   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:09.979007   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:10.059809   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:08.850333   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.132696   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.132738   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:09.349928   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.354965   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.355007   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:09.850589   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.855760   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.855791   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:10.350395   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:10.356047   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0314 00:58:10.363343   65864 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 00:58:10.363367   65864 api_server.go:131] duration metric: took 7.013748269s to wait for apiserver health ...
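	The 500 responses above are /healthz reporting unfinished post-start hooks; the wait loop simply re-polls until a 200 arrives. A minimal sketch of that kind of poll loop in Go (illustrative only, not minikube's api_server.go; the URL and the InsecureSkipVerify setting for the test cluster's self-signed certs are assumptions for the example):

	// healthzpoll.go - illustrative sketch of polling an apiserver /healthz
	// endpoint until it returns 200 or a deadline expires.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for the sketch: skip cert verification for the
			// cluster's self-signed certificate; a real client would trust
			// the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
				// 403/500 while post-start hooks finish is expected; retry.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.115:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}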
	I0314 00:58:10.363376   65864 cni.go:84] Creating CNI manager for ""
	I0314 00:58:10.363382   65864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:10.365214   65864 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:10.366578   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:58:10.388294   65864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:58:10.416671   65864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:58:10.432468   65864 system_pods.go:59] 8 kube-system pods found
	I0314 00:58:10.432506   65864 system_pods.go:61] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:58:10.432513   65864 system_pods.go:61] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:58:10.432522   65864 system_pods.go:61] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:58:10.432528   65864 system_pods.go:61] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:58:10.432532   65864 system_pods.go:61] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 00:58:10.432536   65864 system_pods.go:61] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:58:10.432541   65864 system_pods.go:61] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:58:10.432545   65864 system_pods.go:61] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 00:58:10.432552   65864 system_pods.go:74] duration metric: took 15.857608ms to wait for pod list to return data ...
	I0314 00:58:10.432558   65864 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:58:10.435982   65864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:58:10.436009   65864 node_conditions.go:123] node cpu capacity is 2
	I0314 00:58:10.436022   65864 node_conditions.go:105] duration metric: took 3.459248ms to run NodePressure ...
	I0314 00:58:10.436048   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:10.711752   65864 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:58:10.718781   65864 kubeadm.go:733] kubelet initialised
	I0314 00:58:10.718802   65864 kubeadm.go:734] duration metric: took 7.016806ms waiting for restarted kubelet to initialise ...
	I0314 00:58:10.718811   65864 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:10.725838   65864 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.732973   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "coredns-76f75df574-lptfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.733003   65864 pod_ready.go:81] duration metric: took 7.130935ms for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.733015   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "coredns-76f75df574-lptfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.733024   65864 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.739301   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "etcd-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.739330   65864 pod_ready.go:81] duration metric: took 6.292816ms for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.739344   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "etcd-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.739353   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.745734   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-apiserver-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.745764   65864 pod_ready.go:81] duration metric: took 6.401917ms for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.745775   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-apiserver-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.745793   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.823797   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.823901   65864 pod_ready.go:81] duration metric: took 78.092373ms for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.823920   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.823930   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:11.221218   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-proxy-wpdb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.221255   65864 pod_ready.go:81] duration metric: took 397.31401ms for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:11.221268   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-proxy-wpdb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.221276   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:11.622051   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-scheduler-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.622089   65864 pod_ready.go:81] duration metric: took 400.804067ms for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:11.622101   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-scheduler-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.622109   65864 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:12.021835   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:12.021869   65864 pod_ready.go:81] duration metric: took 399.741056ms for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:12.021882   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:12.021892   65864 pod_ready.go:38] duration metric: took 1.303069721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
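	The WaitExtra phase above checks each system-critical pod for the Ready condition and skips pods whose node still reports Ready=False. A hedged client-go sketch of that kind of readiness check (the kubeconfig path and pod name below are placeholders, not values taken from this run):

	// podready.go - minimal client-go sketch of waiting for a pod's Ready
	// condition, analogous to the pod_ready.go wait in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path for the sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			// Placeholder pod name; the log waits on several such pods in turn.
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-placeholder", metav1.GetOptions{})
			if err == nil && podIsReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to become Ready")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}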
	I0314 00:58:12.021915   65864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:58:12.039361   65864 ops.go:34] apiserver oom_adj: -16
	I0314 00:58:12.039397   65864 kubeadm.go:591] duration metric: took 11.402802169s to restartPrimaryControlPlane
	I0314 00:58:12.039408   65864 kubeadm.go:393] duration metric: took 11.467836192s to StartCluster
	I0314 00:58:12.039426   65864 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:12.039516   65864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:12.041925   65864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:12.042230   65864 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:58:12.044069   65864 out.go:177] * Verifying Kubernetes components...
	I0314 00:58:12.042310   65864 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:58:12.042489   65864 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:58:12.045460   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:12.045470   65864 addons.go:69] Setting metrics-server=true in profile "no-preload-585806"
	I0314 00:58:12.045505   65864 addons.go:234] Setting addon metrics-server=true in "no-preload-585806"
	W0314 00:58:12.045517   65864 addons.go:243] addon metrics-server should already be in state true
	I0314 00:58:12.045461   65864 addons.go:69] Setting storage-provisioner=true in profile "no-preload-585806"
	I0314 00:58:12.045548   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.045557   65864 addons.go:234] Setting addon storage-provisioner=true in "no-preload-585806"
	W0314 00:58:12.045568   65864 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:58:12.045595   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.045462   65864 addons.go:69] Setting default-storageclass=true in profile "no-preload-585806"
	I0314 00:58:12.045653   65864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-585806"
	I0314 00:58:12.045960   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.045989   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.045989   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.046009   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.046026   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.046052   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.065596   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38705
	I0314 00:58:12.065599   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
	I0314 00:58:12.066126   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.066229   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.066725   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.066747   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.066921   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.066937   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.067164   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.067341   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.067347   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.067943   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.067969   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.071254   65864 addons.go:234] Setting addon default-storageclass=true in "no-preload-585806"
	W0314 00:58:12.071275   65864 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:58:12.071302   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.071676   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.071703   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.089025   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0314 00:58:12.089439   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.089971   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.089987   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.091596   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38431
	I0314 00:58:12.091896   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.092061   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.092552   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.092573   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.092792   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39831
	I0314 00:58:12.092997   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.093009   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.093356   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.093879   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.093914   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.094125   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.094811   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.094830   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.095229   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.095432   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.097415   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.099392   65864 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:12.100577   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:58:12.100594   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:58:12.100618   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.103892   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.104467   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.104489   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.104667   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.106971   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.107150   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.107313   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.111900   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0314 00:58:12.112581   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.113114   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.113130   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.113580   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.113776   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.115360   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.115676   65864 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:12.115691   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:58:12.115707   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.117453   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0314 00:58:12.118029   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.118488   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.118776   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.118793   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.118960   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.118982   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.119173   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.119729   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.119945   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.121529   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.123821   65864 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:08.814918   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:08.815383   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:08.815414   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:08.815336   66971 retry.go:31] will retry after 1.619075545s: waiting for machine to come up
	I0314 00:58:10.435841   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:10.436245   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:10.436272   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:10.436204   66971 retry.go:31] will retry after 2.396707044s: waiting for machine to come up
	I0314 00:58:12.834287   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:12.834691   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:12.834720   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:12.834649   66971 retry.go:31] will retry after 2.803309164s: waiting for machine to come up
	I0314 00:58:12.122163   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.125529   65864 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:12.125549   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:58:12.125566   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.125622   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.128908   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.128920   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.129475   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.129499   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.129653   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.129851   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.130023   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.130149   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.258865   65864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:12.279758   65864 node_ready.go:35] waiting up to 6m0s for node "no-preload-585806" to be "Ready" ...
	I0314 00:58:12.393255   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:58:12.393276   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:58:12.396083   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:12.401894   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:12.442825   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:58:12.442852   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:58:12.516967   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:12.516997   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:58:12.549493   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:13.476386   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.080265638s)
	I0314 00:58:13.476460   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.476489   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.476397   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.074462931s)
	I0314 00:58:13.476626   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.476639   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477023   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477039   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477036   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477047   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.477055   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477066   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477071   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477087   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477094   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.477100   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477458   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477491   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477498   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477550   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477566   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.489141   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.489174   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.489460   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.489522   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.489541   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.586956   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037420385s)
	I0314 00:58:13.587013   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.587029   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.587367   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.587386   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.587396   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.587405   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.587406   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.587781   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.587856   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.587878   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.587910   65864 addons.go:470] Verifying addon metrics-server=true in "no-preload-585806"
	I0314 00:58:13.590325   65864 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:58:13.591691   65864 addons.go:505] duration metric: took 1.549382287s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
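	Addon enablement above ships the manifests to /etc/kubernetes/addons/ on the guest and applies them with the bundled kubectl over SSH. A simplified local equivalent of the apply step (assumes a kubectl binary on PATH; the manifest paths mirror the log but serve only as an example):

	// applyaddons.go - simplified sketch of applying addon manifests with
	// kubectl, mirroring the ssh_runner "kubectl apply -f ..." calls above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("kubectl apply failed:", err)
		}
	}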
	I0314 00:58:10.176806   66021 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:10.176884   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:10.677299   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:11.177069   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:11.214552   66021 api_server.go:72] duration metric: took 1.037744324s to wait for apiserver process to appear ...
	I0314 00:58:11.214587   66021 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:11.214610   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:11.215138   66021 api_server.go:269] stopped: https://192.168.61.7:8444/healthz: Get "https://192.168.61.7:8444/healthz": dial tcp 192.168.61.7:8444: connect: connection refused
	I0314 00:58:11.714667   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.616838   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:14.616877   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:14.616893   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.658759   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:14.658796   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:14.715024   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.733591   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:14.733634   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:15.214665   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:15.234066   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:15.234110   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:15.715301   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:15.721645   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:15.721675   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:16.215286   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:16.222564   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0314 00:58:16.232709   66021 api_server.go:141] control plane version: v1.28.4
	I0314 00:58:16.232737   66021 api_server.go:131] duration metric: took 5.018142072s to wait for apiserver health ...
	I0314 00:58:16.232747   66021 cni.go:84] Creating CNI manager for ""
	I0314 00:58:16.232756   66021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:16.234470   66021 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:16.235612   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:58:16.248214   66021 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
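	As with the no-preload node earlier, the bridge CNI step writes a small conflist into /etc/cni/net.d/. A hedged sketch of writing a generic bridge conflist (the JSON fields are illustrative, not the exact 457-byte file minikube generates):

	// writecni.go - illustrative sketch of writing a bridge CNI conflist like
	// the /etc/cni/net.d/1-k8s.conflist step above.
	package main

	import (
		"fmt"
		"os"
	)

	// Generic bridge + portmap conflist; values are assumptions for the example.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			fmt.Println(err)
			return
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Println(err)
		}
	}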
	I0314 00:58:16.277370   66021 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:58:16.288623   66021 system_pods.go:59] 8 kube-system pods found
	I0314 00:58:16.288650   66021 system_pods.go:61] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:58:16.288657   66021 system_pods.go:61] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:58:16.288663   66021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:58:16.288671   66021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:58:16.288677   66021 system_pods.go:61] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 00:58:16.288682   66021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:58:16.288687   66021 system_pods.go:61] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:58:16.288690   66021 system_pods.go:61] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 00:58:16.288696   66021 system_pods.go:74] duration metric: took 11.305344ms to wait for pod list to return data ...
	I0314 00:58:16.288702   66021 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:58:16.292286   66021 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:58:16.292308   66021 node_conditions.go:123] node cpu capacity is 2
	I0314 00:58:16.292320   66021 node_conditions.go:105] duration metric: took 3.61409ms to run NodePressure ...
	I0314 00:58:16.292335   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:16.512870   66021 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:58:16.517507   66021 kubeadm.go:733] kubelet initialised
	I0314 00:58:16.517529   66021 kubeadm.go:734] duration metric: took 4.638745ms waiting for restarted kubelet to initialise ...
	I0314 00:58:16.517536   66021 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:16.523002   66021 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.527973   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.527992   66021 pod_ready.go:81] duration metric: took 4.971635ms for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.527999   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.528005   66021 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.532109   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.532130   66021 pod_ready.go:81] duration metric: took 4.119441ms for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.532138   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.532144   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.536921   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.536947   66021 pod_ready.go:81] duration metric: took 4.797369ms for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.536957   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.536963   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.681145   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.681174   66021 pod_ready.go:81] duration metric: took 144.203955ms for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.681183   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.681189   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.081346   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-proxy-s7dwp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.081372   66021 pod_ready.go:81] duration metric: took 400.176843ms for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.081380   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-proxy-s7dwp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.081386   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.481726   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.481760   66021 pod_ready.go:81] duration metric: took 400.364366ms for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.481775   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.481784   66021 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.881076   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.881101   66021 pod_ready.go:81] duration metric: took 399.308565ms for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.881112   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.881118   66021 pod_ready.go:38] duration metric: took 1.363574607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
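
The pod_ready loop above keeps re-reading each system-critical pod and skips it while the hosting node still reports Ready=False; the condition it ultimately waits for is the pod's own Ready condition. The client-go sketch below shows that check in simplified form; the kubeconfig path and pod name are placeholders, and this is a stand-in for the pattern, not minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether a pod has the Ready condition set to True,
// which is the condition the wait above keeps checking for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the test uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-5dd5756b68-cc7x2", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
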
	I0314 00:58:17.881137   66021 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:58:17.893680   66021 ops.go:34] apiserver oom_adj: -16
	I0314 00:58:17.893703   66021 kubeadm.go:591] duration metric: took 9.411432465s to restartPrimaryControlPlane
	I0314 00:58:17.893711   66021 kubeadm.go:393] duration metric: took 9.465165177s to StartCluster
	I0314 00:58:17.893725   66021 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:17.893783   66021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:17.895292   66021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:17.895523   66021 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:58:17.897956   66021 out.go:177] * Verifying Kubernetes components...
	I0314 00:58:17.895646   66021 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:58:17.895730   66021 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:17.898002   66021 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.898023   66021 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.899554   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:17.897994   66021 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.899681   66021 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.899693   66021 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:58:17.898063   66021 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-652215"
	I0314 00:58:17.899720   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.898068   66021 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.899784   66021 addons.go:243] addon metrics-server should already be in state true
	I0314 00:58:17.899811   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.900048   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900077   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.900111   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900141   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.900171   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900188   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.915185   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0314 00:58:17.915208   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
	I0314 00:58:17.915576   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.915710   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.916152   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.916171   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.916305   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.916330   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.916511   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.916671   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.916831   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.917105   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.917132   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.918252   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41673
	I0314 00:58:17.918697   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.919230   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.919250   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.919523   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.920110   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.920171   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.920214   66021 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.920231   66021 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:58:17.920262   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.920646   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.920681   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.932173   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0314 00:58:17.932593   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.933094   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.933117   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.933473   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.933707   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.934448   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0314 00:58:17.934516   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I0314 00:58:17.934891   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.935069   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.935423   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.935443   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.935577   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.935595   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.935663   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.937699   66021 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:17.936039   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.936042   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.938931   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:58:17.938948   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:58:17.938977   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.939211   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.939596   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.939625   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.941065   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.942845   66021 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:15.639214   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:15.639656   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:15.639696   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:15.639617   66971 retry.go:31] will retry after 3.192360952s: waiting for machine to come up
	I0314 00:58:14.292798   65864 node_ready.go:53] node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:16.784397   65864 node_ready.go:53] node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:17.284580   65864 node_ready.go:49] node "no-preload-585806" has status "Ready":"True"
	I0314 00:58:17.284611   65864 node_ready.go:38] duration metric: took 5.004823398s for node "no-preload-585806" to be "Ready" ...
	I0314 00:58:17.284623   65864 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:17.290888   65864 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.297127   65864 pod_ready.go:92] pod "coredns-76f75df574-lptfk" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:17.297152   65864 pod_ready.go:81] duration metric: took 6.235547ms for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.297163   65864 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.944316   66021 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:17.942113   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.942648   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.944350   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:58:17.944376   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.944371   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.944451   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.944500   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.944675   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.944826   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:17.947097   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.947474   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.947507   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.947640   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.947816   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.947960   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.948095   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:17.957502   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35563
	I0314 00:58:17.957899   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.958344   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.958364   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.958645   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.958816   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.960222   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.960577   66021 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:17.960591   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:58:17.960610   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.963238   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.963676   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.963698   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.963850   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.963995   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.964114   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.964213   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:18.098402   66021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:18.116854   66021 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-652215" to be "Ready" ...
	I0314 00:58:18.232236   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:58:18.232256   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:58:18.238208   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:18.261851   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:18.263856   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:58:18.263877   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:58:18.325498   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:18.325520   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:58:18.391369   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:19.482825   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24458075s)
	I0314 00:58:19.482879   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.482891   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.482959   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.221078542s)
	I0314 00:58:19.483000   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483017   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483196   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483216   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.483212   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.483224   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483233   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483242   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483258   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.483273   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483280   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483288   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.483551   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483590   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.484020   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.484105   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.484148   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.491315   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.491332   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.491552   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.491583   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583024   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.191597961s)
	I0314 00:58:19.583083   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.583096   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.583362   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.583400   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583421   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.583435   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.583447   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.583724   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.583762   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.583815   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583837   66021 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-652215"
	I0314 00:58:19.585771   66021 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:58:19.587252   66021 addons.go:505] duration metric: took 1.691609624s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 00:58:20.120924   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:18.833069   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:18.833438   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:18.833470   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:18.833388   66971 retry.go:31] will retry after 5.67556795s: waiting for machine to come up
	I0314 00:58:19.304162   65864 pod_ready.go:102] pod "etcd-no-preload-585806" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:20.804158   65864 pod_ready.go:92] pod "etcd-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.804180   65864 pod_ready.go:81] duration metric: took 3.507009199s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.804191   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.810040   65864 pod_ready.go:92] pod "kube-apiserver-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.810065   65864 pod_ready.go:81] duration metric: took 5.865494ms for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.810080   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.815049   65864 pod_ready.go:92] pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.815077   65864 pod_ready.go:81] duration metric: took 4.984409ms for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.815086   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.821316   65864 pod_ready.go:92] pod "kube-proxy-wpdb9" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.821342   65864 pod_ready.go:81] duration metric: took 6.249664ms for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.821354   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:21.828500   65864 pod_ready.go:92] pod "kube-scheduler-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:21.828524   65864 pod_ready.go:81] duration metric: took 1.00716238s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:21.828533   65864 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:22.621791   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:25.121386   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:26.059625   65557 start.go:364] duration metric: took 59.181975988s to acquireMachinesLock for "embed-certs-164135"
	I0314 00:58:26.059670   65557 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:58:26.059681   65557 fix.go:54] fixHost starting: 
	I0314 00:58:26.060084   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:26.060117   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:26.079338   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0314 00:58:26.079705   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:26.080159   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:58:26.080181   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:26.080547   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:26.080747   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:26.080907   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:58:26.082633   65557 fix.go:112] recreateIfNeeded on embed-certs-164135: state=Stopped err=<nil>
	I0314 00:58:26.082671   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	W0314 00:58:26.082861   65557 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:58:26.085610   65557 out.go:177] * Restarting existing kvm2 VM for "embed-certs-164135" ...
	I0314 00:58:24.511666   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.512275   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has current primary IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.512307   66232 main.go:141] libmachine: (old-k8s-version-004791) Found IP for machine: 192.168.72.11
	I0314 00:58:24.512321   66232 main.go:141] libmachine: (old-k8s-version-004791) Reserving static IP address...
	I0314 00:58:24.512704   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.512726   66232 main.go:141] libmachine: (old-k8s-version-004791) Reserved static IP address: 192.168.72.11
	I0314 00:58:24.512740   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | skip adding static IP to network mk-old-k8s-version-004791 - found existing host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"}
	I0314 00:58:24.512751   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Getting to WaitForSSH function...
	I0314 00:58:24.512763   66232 main.go:141] libmachine: (old-k8s-version-004791) Waiting for SSH to be available...
	I0314 00:58:24.515177   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.515623   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.515657   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.515863   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH client type: external
	I0314 00:58:24.515892   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa (-rw-------)
	I0314 00:58:24.515924   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:58:24.515940   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | About to run SSH command:
	I0314 00:58:24.515956   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | exit 0
	I0314 00:58:24.642866   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | SSH cmd err, output: <nil>: 
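
The WaitForSSH step above is a simple availability probe: run "exit 0" over ssh with host-key checking disabled and retry until the command succeeds. A rough Go equivalent of that probe loop follows; it is a sketch only, with the ssh flags copied from the invocation logged above and the key path taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs "exit 0" over ssh until it succeeds or the timeout elapses,
// mirroring the WaitForSSH probe in the log above.
func waitForSSH(host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available after %s", host, timeout)
}

func main() {
	err := waitForSSH("192.168.72.11",
		"/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa",
		2*time.Minute)
	fmt.Println(err)
}
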
	I0314 00:58:24.643186   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetConfigRaw
	I0314 00:58:24.643853   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:24.645950   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.646309   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.646338   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.646566   66232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:58:24.646801   66232 machine.go:94] provisionDockerMachine start ...
	I0314 00:58:24.646823   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:24.647032   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.649249   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.649588   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.649618   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.649752   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.649926   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.650131   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.650315   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.650487   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.650664   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.650675   66232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:58:24.763290   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:58:24.763320   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:24.763558   66232 buildroot.go:166] provisioning hostname "old-k8s-version-004791"
	I0314 00:58:24.763592   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:24.763745   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.766422   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.766719   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.766745   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.766894   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.767075   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.767238   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.767388   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.767564   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.767776   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.767795   66232 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-004791 && echo "old-k8s-version-004791" | sudo tee /etc/hostname
	I0314 00:58:24.893811   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-004791
	
	I0314 00:58:24.893844   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.896527   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.896909   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.896937   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.897096   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.897277   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.897455   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.897623   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.897814   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.897979   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.897995   66232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-004791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-004791/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-004791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:25.021661   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:25.021695   66232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:25.021722   66232 buildroot.go:174] setting up certificates
	I0314 00:58:25.021735   66232 provision.go:84] configureAuth start
	I0314 00:58:25.021766   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:25.022032   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:25.024687   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.024989   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.025030   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.025155   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.027609   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.027948   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.027977   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.028079   66232 provision.go:143] copyHostCerts
	I0314 00:58:25.028145   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:25.028155   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:25.028208   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:25.028333   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:25.028342   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:25.028361   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:25.028421   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:25.028428   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:25.028445   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:25.028532   66232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-004791 san=[127.0.0.1 192.168.72.11 localhost minikube old-k8s-version-004791]
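
The server certificate above is issued from the profile's CA with the logged subject organization and SANs. The self-contained Go sketch below follows the same x509 recipe; to stay runnable it generates a throwaway CA in memory instead of loading ca.pem/ca-key.pem, so it is an illustration rather than minikube's provision.go (error handling elided for brevity).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for the profile's ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-004791"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-004791"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.11")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
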
	I0314 00:58:25.338174   66232 provision.go:177] copyRemoteCerts
	I0314 00:58:25.338239   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:25.338272   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.340651   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.341044   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.341084   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.341243   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.341445   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.341613   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.341779   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:25.437346   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 00:58:25.464534   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:58:25.491186   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:25.520290   66232 provision.go:87] duration metric: took 498.536449ms to configureAuth
	I0314 00:58:25.520330   66232 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:25.520551   66232 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:58:25.520631   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.523579   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.523954   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.523982   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.524176   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.524418   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.524604   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.524841   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.525032   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:25.525233   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:25.525267   66232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:25.813702   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:25.813724   66232 machine.go:97] duration metric: took 1.166910056s to provisionDockerMachine
	I0314 00:58:25.813735   66232 start.go:293] postStartSetup for "old-k8s-version-004791" (driver="kvm2")
	I0314 00:58:25.813745   66232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:25.813767   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:25.814102   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:25.814132   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.816973   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.817316   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.817351   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.817496   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.817695   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.817895   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.818065   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:25.905564   66232 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:25.910139   66232 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:25.910168   66232 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:25.910237   66232 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:25.910315   66232 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:25.910406   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:25.919998   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:25.946236   66232 start.go:296] duration metric: took 132.483335ms for postStartSetup
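The postStartSetup step above scans the local .minikube/addons and .minikube/files trees and mirrors anything found under files/ into the guest at the same relative path (here files/etc/ssl/certs/122682.pem lands at /etc/ssl/certs/122682.pem). Below is a minimal sketch of just that path mapping, assuming a local root directory and leaving out the actual scp transfer; the names are illustrative, not minikube's own API.

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// mapAssets walks localRoot (for example ~/.minikube/files) and returns, for
// every regular file, the guest path it should be copied to: the path
// relative to localRoot, re-rooted at "/".
func mapAssets(localRoot string) (map[string]string, error) {
	targets := map[string]string{}
	err := filepath.WalkDir(localRoot, func(path string, d fs.DirEntry, walkErr error) error {
		if walkErr != nil || d.IsDir() {
			return walkErr
		}
		rel, err := filepath.Rel(localRoot, path)
		if err != nil {
			return err
		}
		targets[path] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return targets, err
}

func main() {
	// Illustrative root; with the layout from the log this would print
	// .../files/etc/ssl/certs/122682.pem -> /etc/ssl/certs/122682.pem
	m, err := mapAssets("/home/jenkins/minikube-integration/18375-4912/.minikube/files")
	if err != nil {
		fmt.Println("walk failed:", err)
		return
	}
	for src, dst := range m {
		fmt.Printf("%s -> %s\n", src, dst)
	}
}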
	I0314 00:58:25.946270   66232 fix.go:56] duration metric: took 24.778527973s for fixHost
	I0314 00:58:25.946291   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.948993   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.949352   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.949382   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.949491   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.949674   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.949839   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.950008   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.950178   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:25.950327   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:25.950337   66232 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 00:58:26.059477   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377906.045276928
	
	I0314 00:58:26.059498   66232 fix.go:216] guest clock: 1710377906.045276928
	I0314 00:58:26.059504   66232 fix.go:229] Guest: 2024-03-14 00:58:26.045276928 +0000 UTC Remote: 2024-03-14 00:58:25.946273472 +0000 UTC m=+262.884746009 (delta=99.003456ms)
	I0314 00:58:26.059522   66232 fix.go:200] guest clock delta is within tolerance: 99.003456ms
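The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the drift when the delta is small enough. A minimal sketch of that comparison follows, assuming a 2-second threshold; the actual tolerance value is not shown in this log.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is from the supplied host time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	// Values taken from the log lines above.
	delta, err := clockDelta("1710377906.045276928\n", time.Unix(1710377905, 946273472))
	if err != nil {
		panic(err)
	}
	ok := math.Abs(float64(delta)) <= float64(tolerance)
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, ok)
}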
	I0314 00:58:26.059528   66232 start.go:83] releasing machines lock for "old-k8s-version-004791", held for 24.891823469s
	I0314 00:58:26.059556   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.059832   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:26.062667   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.063095   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.063126   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.063322   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064047   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064262   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064348   66232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:26.064396   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:26.064505   66232 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:26.064530   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:26.067308   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067569   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067602   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.067626   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067738   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:26.067912   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:26.068059   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:26.068063   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.068095   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.068199   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:26.068210   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:26.068347   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:26.068538   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:26.068717   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:26.182072   66232 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:26.188630   66232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:26.337675   66232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:26.344107   66232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:26.344178   66232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:26.363679   66232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:26.363704   66232 start.go:494] detecting cgroup driver to use...
	I0314 00:58:26.363770   66232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:26.380626   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:26.397287   66232 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:26.397354   66232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:26.411921   66232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:26.428111   66232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:26.548503   66232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:26.718585   66232 docker.go:233] disabling docker service ...
	I0314 00:58:26.718667   66232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:26.737814   66232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:26.759326   66232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:26.907505   66232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:27.052915   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:27.074324   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:27.096627   66232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 00:58:27.096688   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.109204   66232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:27.109280   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.122529   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.135542   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.149084   66232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:27.166838   66232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:27.178148   66232 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:27.178201   66232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:27.194015   66232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:27.206652   66232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:27.363680   66232 ssh_runner.go:195] Run: sudo systemctl restart crio
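The sequence above pins the CRI-O pause image and switches the cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf with sed, then reloads systemd and restarts crio. The same line rewrite expressed in Go is sketched below; it mirrors the sed expressions from the log but is illustrative, not minikube's actual implementation.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// setConfigValue replaces any line assigning key with `key = "value"`,
// mirroring sed expressions such as
// s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|.
func setConfigValue(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf(`%s = %q`, key, value))
}

func main() {
	// A tiny stand-in for 02-crio.conf.
	conf := strings.Join([]string{
		`[crio.image]`,
		`pause_image = "registry.k8s.io/pause:3.9"`,
		`[crio.runtime]`,
		`cgroup_manager = "systemd"`,
	}, "\n")

	conf = setConfigValue(conf, "pause_image", "registry.k8s.io/pause:3.2")
	conf = setConfigValue(conf, "cgroup_manager", "cgroupfs")
	fmt.Println(conf)
}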
	I0314 00:58:27.546218   66232 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:27.546291   66232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:27.552622   66232 start.go:562] Will wait 60s for crictl version
	I0314 00:58:27.552693   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:27.557087   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:27.600271   66232 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:27.600369   66232 ssh_runner.go:195] Run: crio --version
	I0314 00:58:27.631397   66232 ssh_runner.go:195] Run: crio --version
	I0314 00:58:27.670760   66232 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 00:58:27.671963   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:27.674890   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:27.675324   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:27.675352   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:27.675617   66232 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:27.680460   66232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:27.694168   66232 kubeadm.go:877] updating cluster {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:27.694308   66232 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:58:27.694363   66232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:27.750541   66232 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:58:27.750608   66232 ssh_runner.go:195] Run: which lz4
	I0314 00:58:27.755341   66232 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 00:58:27.759948   66232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:27.759972   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 00:58:23.835559   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:25.840794   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:28.343597   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:26.087053   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Start
	I0314 00:58:26.087223   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring networks are active...
	I0314 00:58:26.087972   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring network default is active
	I0314 00:58:26.088454   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring network mk-embed-certs-164135 is active
	I0314 00:58:26.088918   65557 main.go:141] libmachine: (embed-certs-164135) Getting domain xml...
	I0314 00:58:26.089551   65557 main.go:141] libmachine: (embed-certs-164135) Creating domain...
	I0314 00:58:27.427891   65557 main.go:141] libmachine: (embed-certs-164135) Waiting to get IP...
	I0314 00:58:27.428743   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.429231   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.429301   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.429210   67191 retry.go:31] will retry after 285.906124ms: waiting for machine to come up
	I0314 00:58:27.716658   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.717175   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.717209   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.717136   67191 retry.go:31] will retry after 261.410434ms: waiting for machine to come up
	I0314 00:58:27.980701   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.981229   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.981260   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.981171   67191 retry.go:31] will retry after 383.915233ms: waiting for machine to come up
	I0314 00:58:28.366876   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:28.367381   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:28.367410   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:28.367323   67191 retry.go:31] will retry after 409.436475ms: waiting for machine to come up
	I0314 00:58:28.778072   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:28.778576   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:28.778610   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:28.778531   67191 retry.go:31] will retry after 645.067189ms: waiting for machine to come up
	I0314 00:58:25.621956   66021 node_ready.go:49] node "default-k8s-diff-port-652215" has status "Ready":"True"
	I0314 00:58:25.621981   66021 node_ready.go:38] duration metric: took 7.505100774s for node "default-k8s-diff-port-652215" to be "Ready" ...
	I0314 00:58:25.622001   66021 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:25.629545   66021 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.639732   66021 pod_ready.go:92] pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.639756   66021 pod_ready.go:81] duration metric: took 10.187009ms for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.639764   66021 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.645147   66021 pod_ready.go:92] pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.645169   66021 pod_ready.go:81] duration metric: took 5.39858ms for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.645177   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.654707   66021 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.654733   66021 pod_ready.go:81] duration metric: took 9.549239ms for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.654744   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.662542   66021 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.662564   66021 pod_ready.go:81] duration metric: took 7.811214ms for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.662573   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:26.022161   66021 pod_ready.go:92] pod "kube-proxy-s7dwp" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:26.022183   66021 pod_ready.go:81] duration metric: took 359.604841ms for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:26.022192   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:28.034582   66021 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"False"
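The default-k8s-diff-port-652215 run is doing the same kind of waiting one level up: pod_ready.go polls each system-critical pod and keeps logging Ready:"False" until the condition flips or the 6m0s budget expires. A minimal sketch of such a wait loop is shown below with a stubbed status check; it is not the client-go call minikube actually makes.

package main

import (
	"context"
	"fmt"
	"time"
)

// waitPodReady polls isReady roughly twice a second until it reports true or
// ctx expires, returning how long the wait took.
func waitPodReady(ctx context.Context, pod string, isReady func(string) bool) (time.Duration, error) {
	start := time.Now()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if isReady(pod) {
			return time.Since(start), nil
		}
		select {
		case <-ctx.Done():
			return time.Since(start), fmt.Errorf("pod %q not ready: %w", pod, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	flips := time.Now().Add(2 * time.Second)
	isReady := func(string) bool { return time.Now().After(flips) } // stub status check

	took, err := waitPodReady(ctx, "kube-scheduler-default-k8s-diff-port-652215", isReady)
	fmt.Println(took, err)
}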
	I0314 00:58:29.648218   66232 crio.go:444] duration metric: took 1.892901715s to copy over tarball
	I0314 00:58:29.648301   66232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:32.846478   66232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.198145754s)
	I0314 00:58:32.846506   66232 crio.go:451] duration metric: took 3.198257099s to extract the tarball
	I0314 00:58:32.846513   66232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:32.893263   66232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:32.930449   66232 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:58:32.930473   66232 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:58:32.930511   66232 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:32.930536   66232 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:32.930550   66232 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:32.930559   66232 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:32.930802   66232 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:32.930888   66232 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 00:58:32.930940   66232 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:32.931147   66232 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:32.931888   66232 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:32.931948   66232 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:32.932319   66232 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:32.932341   66232 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 00:58:32.932374   66232 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:32.932381   66232 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:32.932370   66232 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:32.932419   66232 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:30.836400   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:32.841831   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:29.425434   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:29.425984   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:29.426008   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:29.425942   67191 retry.go:31] will retry after 703.398838ms: waiting for machine to come up
	I0314 00:58:30.130649   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:30.131265   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:30.131297   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:30.131224   67191 retry.go:31] will retry after 787.377618ms: waiting for machine to come up
	I0314 00:58:30.919951   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:30.920381   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:30.920416   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:30.920331   67191 retry.go:31] will retry after 1.211901471s: waiting for machine to come up
	I0314 00:58:32.133720   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:32.134308   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:32.134337   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:32.134254   67191 retry.go:31] will retry after 1.852403479s: waiting for machine to come up
	I0314 00:58:33.987895   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:33.988474   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:33.988503   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:33.988426   67191 retry.go:31] will retry after 2.321557159s: waiting for machine to come up
	I0314 00:58:30.530679   66021 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:30.530711   66021 pod_ready.go:81] duration metric: took 4.508510256s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:30.530725   66021 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:32.539227   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:34.543975   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:33.154008   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 00:58:33.158391   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.163815   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.167903   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.168224   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.169039   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.185385   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.418931   66232 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 00:58:33.418981   66232 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 00:58:33.419031   66232 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 00:58:33.419052   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419063   66232 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 00:58:33.419099   66232 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.419031   66232 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 00:58:33.419118   66232 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 00:58:33.419141   66232 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.419173   66232 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 00:58:33.419200   66232 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.419232   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419099   66232 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.419310   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419177   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419143   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419142   66232 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 00:58:33.419396   66232 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.419419   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419144   66232 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.419472   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.436581   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 00:58:33.436585   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.436693   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.436697   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.436760   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.436812   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.436821   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.605693   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 00:58:33.605727   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 00:58:33.605788   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 00:58:33.605799   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 00:58:33.605879   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 00:58:33.605912   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 00:58:33.605952   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 00:58:33.844071   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:33.989885   66232 cache_images.go:92] duration metric: took 1.059398314s to LoadCachedImages
	W0314 00:58:33.990001   66232 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0314 00:58:33.990027   66232 kubeadm.go:928] updating node { 192.168.72.11 8443 v1.20.0 crio true true} ...
	I0314 00:58:33.990157   66232 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-004791 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:33.990220   66232 ssh_runner.go:195] Run: crio config
	I0314 00:58:34.044723   66232 cni.go:84] Creating CNI manager for ""
	I0314 00:58:34.044746   66232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:34.044759   66232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:34.044775   66232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-004791 NodeName:old-k8s-version-004791 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 00:58:34.044900   66232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-004791"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:34.044958   66232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 00:58:34.059679   66232 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:34.059734   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:34.073682   66232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0314 00:58:34.095098   66232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:34.113899   66232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
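The kubeadm config shown above is rendered from the options struct logged at kubeadm.go:181 and then written to the guest as /var/tmp/minikube/kubeadm.yaml.new. The sketch below shows that kind of rendering with text/template, covering only the InitConfiguration fragment; the struct fields and template are simplified stand-ins, not minikube's real template.

package main

import (
	"os"
	"text/template"
)

// Options is a trimmed-down stand-in for the kubeadm options printed in the log.
type Options struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	CRISocket        string
}

const initConfig = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	opts := Options{
		AdvertiseAddress: "192.168.72.11",
		APIServerPort:    8443,
		NodeName:         "old-k8s-version-004791",
		CRISocket:        "/var/run/crio/crio.sock",
	}
	tmpl := template.Must(template.New("init").Parse(initConfig))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}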
	I0314 00:58:34.132875   66232 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:34.137285   66232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:34.151566   66232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:34.276059   66232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:34.295472   66232 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791 for IP: 192.168.72.11
	I0314 00:58:34.295496   66232 certs.go:194] generating shared ca certs ...
	I0314 00:58:34.295528   66232 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:34.295718   66232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:34.295779   66232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:34.295794   66232 certs.go:256] generating profile certs ...
	I0314 00:58:34.295909   66232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/client.key
	I0314 00:58:34.295968   66232 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key.c57f8e0c
	I0314 00:58:34.296022   66232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key
	I0314 00:58:34.296176   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:34.296213   66232 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:34.296224   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:34.296255   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:34.296296   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:34.296336   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:34.296397   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:34.297181   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:34.351330   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:34.389003   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:34.439281   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:34.476704   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 00:58:34.524931   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:58:34.554905   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:34.584216   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:58:34.610661   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:34.636484   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:34.662623   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:34.692373   66232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:34.714670   66232 ssh_runner.go:195] Run: openssl version
	I0314 00:58:34.721394   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:34.734219   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.739692   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.739767   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.746281   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:34.758520   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:34.770960   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.775963   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.776034   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.782485   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:34.795932   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:34.808632   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.814277   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.814338   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.820985   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:34.832959   66232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:34.838642   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:34.845061   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:34.852475   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:34.859861   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:34.866413   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:34.873327   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
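Each `openssl x509 -noout -in ... -checkend 86400` run above asks whether the given certificate will still be valid 24 hours from now. The equivalent check in Go with crypto/x509 is sketched below, run against a throwaway self-signed certificate so the example is self-contained.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// expiresWithin reports whether cert becomes invalid within d of now,
// which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(cert *x509.Certificate, d time.Duration) bool {
	return time.Now().Add(d).After(cert.NotAfter)
}

func main() {
	// Build a throwaway self-signed certificate valid for 10 days so the
	// check has something to run against.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "apiserver-kubelet-client"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(10 * 24 * time.Hour),
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiresWithin(cert, 24*time.Hour)) // false for a 10-day cert
}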
	I0314 00:58:34.880000   66232 kubeadm.go:391] StartCluster: {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:34.880134   66232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:34.880194   66232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:34.927555   66232 cri.go:89] found id: ""
	I0314 00:58:34.927623   66232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:34.939638   66232 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:34.939668   66232 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:34.939677   66232 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:34.939741   66232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:34.950530   66232 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:34.952013   66232 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-004791" does not appear in /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:34.952997   66232 kubeconfig.go:62] /home/jenkins/minikube-integration/18375-4912/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-004791" cluster setting kubeconfig missing "old-k8s-version-004791" context setting]
	I0314 00:58:34.954526   66232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:34.956927   66232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:34.968566   66232 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.11
	I0314 00:58:34.968605   66232 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:34.968619   66232 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:34.968700   66232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:35.007848   66232 cri.go:89] found id: ""
	I0314 00:58:35.007925   66232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:35.025328   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:35.038637   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:35.038656   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:35.038709   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:35.050807   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:35.050869   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:35.063219   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:35.075855   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:35.075920   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:35.085699   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:35.095334   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:35.095380   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:35.105241   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:35.115726   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:35.115792   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
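The grep-and-rm sequence above is the stale kubeconfig cleanup: each file under /etc/kubernetes is checked for the https://control-plane.minikube.internal:8443 endpoint, and any file that does not reference it (here none of the files exist at all) is removed so that the kubeconfig phase below can regenerate it. A rough sketch of that check-and-remove loop, shown only as an illustration and not as the actual kubeadm.go code:

	// Illustrative only: check each kubeconfig for the expected control-plane
	// endpoint and remove any file that does not reference it, so that the
	// following "kubeadm init phase kubeconfig" run regenerates it.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the pattern (or the file itself) is missing.
			if err := exec.Command("grep", endpoint, f).Run(); err != nil {
				if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
					log.Fatalf("remove %s: %v", f, rmErr)
				}
			}
		}
	}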
	I0314 00:58:35.125426   66232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:35.135277   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:35.258033   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.100884   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.354746   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.473996   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
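Because configuration files were found on the node, the restart path re-runs individual kubeadm phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied /var/tmp/minikube/kubeadm.yaml instead of performing a full kubeadm init. A rough sketch of how such a phase sequence can be driven from Go, given here as an assumption for illustration rather than the actual minikube code:

	// Illustrative only, not the actual minikube restart code: run the
	// kubeadm init phases in the same order the log shows, stopping on
	// the first failure.
	package main

	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		phases := []string{
			"certs all",
			"kubeconfig all",
			"kubelet-start",
			"control-plane all",
			"etcd local",
		}
		for _, phase := range phases {
			args := append([]string{"init", "phase"}, strings.Fields(phase)...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				log.Fatalf("kubeadm init phase %s failed: %v", phase, err)
			}
		}
	}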
	I0314 00:58:36.579335   66232 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:36.579424   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:37.079896   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:37.579976   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:38.079765   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
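The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines around this point are a polling loop that waits, with roughly 500 ms between attempts, for the kube-apiserver process to appear after the control-plane phases above. A minimal sketch of that kind of wait loop, assumed shape only and not the actual api_server.go implementation:

	// Illustrative poll loop, not the actual api_server.go implementation:
	// check for the kube-apiserver process every 500 ms until it appears
	// or the deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
		"time"
	)

	func waitForAPIServer(timeout time.Duration) (int, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				if pid, convErr := strconv.Atoi(strings.TrimSpace(string(out))); convErr == nil {
					return pid, nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return 0, fmt.Errorf("kube-apiserver did not appear within %v", timeout)
	}

	func main() {
		pid, err := waitForAPIServer(90 * time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver pid:", pid)
	}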
	I0314 00:58:35.336276   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:37.336541   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:36.312235   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:36.312720   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:36.312746   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:36.312680   67191 retry.go:31] will retry after 2.808090469s: waiting for machine to come up
	I0314 00:58:39.123977   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:39.124488   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:39.124538   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:39.124440   67191 retry.go:31] will retry after 2.588860378s: waiting for machine to come up
	I0314 00:58:37.037739   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:39.540372   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:38.579818   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.079976   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.579658   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:40.079585   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:40.580162   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:41.079979   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:41.580503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:42.079887   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:42.579730   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:43.080073   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.838343   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:42.335840   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:41.714544   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:41.715054   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:41.715078   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:41.715008   67191 retry.go:31] will retry after 4.450032332s: waiting for machine to come up
	I0314 00:58:41.540801   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:44.037483   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:43.579875   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.080058   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.579576   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:45.080234   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:45.579747   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:46.080269   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:46.579541   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:47.079514   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:47.580409   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:48.080337   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.337213   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:46.835872   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:46.166725   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.167181   65557 main.go:141] libmachine: (embed-certs-164135) Found IP for machine: 192.168.50.72
	I0314 00:58:46.167200   65557 main.go:141] libmachine: (embed-certs-164135) Reserving static IP address...
	I0314 00:58:46.167211   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has current primary IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.167614   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "embed-certs-164135", mac: "52:54:00:58:8b:2b", ip: "192.168.50.72"} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.167650   65557 main.go:141] libmachine: (embed-certs-164135) Reserved static IP address: 192.168.50.72
	I0314 00:58:46.167671   65557 main.go:141] libmachine: (embed-certs-164135) DBG | skip adding static IP to network mk-embed-certs-164135 - found existing host DHCP lease matching {name: "embed-certs-164135", mac: "52:54:00:58:8b:2b", ip: "192.168.50.72"}
	I0314 00:58:46.167691   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Getting to WaitForSSH function...
	I0314 00:58:46.167705   65557 main.go:141] libmachine: (embed-certs-164135) Waiting for SSH to be available...
	I0314 00:58:46.169798   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.170208   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.170241   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.170374   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Using SSH client type: external
	I0314 00:58:46.170395   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa (-rw-------)
	I0314 00:58:46.170424   65557 main.go:141] libmachine: (embed-certs-164135) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:58:46.170436   65557 main.go:141] libmachine: (embed-certs-164135) DBG | About to run SSH command:
	I0314 00:58:46.170448   65557 main.go:141] libmachine: (embed-certs-164135) DBG | exit 0
	I0314 00:58:46.298947   65557 main.go:141] libmachine: (embed-certs-164135) DBG | SSH cmd err, output: <nil>: 
	I0314 00:58:46.299260   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetConfigRaw
	I0314 00:58:46.300011   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:46.302213   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.302573   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.302601   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.302857   65557 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/config.json ...
	I0314 00:58:46.303051   65557 machine.go:94] provisionDockerMachine start ...
	I0314 00:58:46.303073   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:46.303267   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.305543   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.305933   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.305966   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.306127   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.306278   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.306414   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.306542   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.306693   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.306879   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.306892   65557 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:58:46.423896   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:58:46.423927   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.424233   65557 buildroot.go:166] provisioning hostname "embed-certs-164135"
	I0314 00:58:46.424264   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.424489   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.427579   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.428007   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.428038   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.428220   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.428416   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.428609   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.428790   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.428972   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.429192   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.429222   65557 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-164135 && echo "embed-certs-164135" | sudo tee /etc/hostname
	I0314 00:58:46.563737   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-164135
	
	I0314 00:58:46.563766   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.566892   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.567220   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.567251   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.567453   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.567641   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.567802   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.567945   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.568094   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.568261   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.568276   65557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-164135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-164135/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-164135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:46.693410   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:46.693445   65557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:46.693499   65557 buildroot.go:174] setting up certificates
	I0314 00:58:46.693511   65557 provision.go:84] configureAuth start
	I0314 00:58:46.693529   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.693870   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:46.696706   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.697040   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.697071   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.697225   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.699614   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.699942   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.699973   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.700098   65557 provision.go:143] copyHostCerts
	I0314 00:58:46.700164   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:46.700178   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:46.700232   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:46.700361   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:46.700377   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:46.700411   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:46.700495   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:46.700505   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:46.700528   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:46.700580   65557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.embed-certs-164135 san=[127.0.0.1 192.168.50.72 embed-certs-164135 localhost minikube]
	I0314 00:58:46.821935   65557 provision.go:177] copyRemoteCerts
	I0314 00:58:46.822010   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:46.822046   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.824932   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.825275   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.825310   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.825512   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.825744   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.825887   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.826082   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:46.913839   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:46.943631   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 00:58:46.971617   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 00:58:46.999369   65557 provision.go:87] duration metric: took 305.844222ms to configureAuth
	I0314 00:58:46.999394   65557 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:46.999570   65557 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:46.999664   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.002702   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.003165   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.003190   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.003438   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.003687   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.003859   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.004006   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.004146   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:47.004340   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:47.004358   65557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:47.290132   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:47.290155   65557 machine.go:97] duration metric: took 987.089694ms to provisionDockerMachine
	I0314 00:58:47.290168   65557 start.go:293] postStartSetup for "embed-certs-164135" (driver="kvm2")
	I0314 00:58:47.290182   65557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:47.290203   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.290511   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:47.290552   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.293582   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.293932   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.293962   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.294089   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.294272   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.294428   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.294671   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.387339   65557 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:47.392557   65557 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:47.392582   65557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:47.392654   65557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:47.392748   65557 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:47.392858   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:47.404173   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:47.435222   65557 start.go:296] duration metric: took 145.038242ms for postStartSetup
	I0314 00:58:47.435269   65557 fix.go:56] duration metric: took 21.375588272s for fixHost
	I0314 00:58:47.435302   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.438631   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.439032   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.439076   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.439272   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.439467   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.439706   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.439850   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.440043   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:47.440200   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:47.440210   65557 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:58:47.560144   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377927.541841951
	
	I0314 00:58:47.560170   65557 fix.go:216] guest clock: 1710377927.541841951
	I0314 00:58:47.560182   65557 fix.go:229] Guest: 2024-03-14 00:58:47.541841951 +0000 UTC Remote: 2024-03-14 00:58:47.435274983 +0000 UTC m=+363.148559319 (delta=106.566968ms)
	I0314 00:58:47.560225   65557 fix.go:200] guest clock delta is within tolerance: 106.566968ms
	I0314 00:58:47.560232   65557 start.go:83] releasing machines lock for "embed-certs-164135", held for 21.500586263s
	I0314 00:58:47.560259   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.560524   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:47.563578   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.563984   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.564007   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.564165   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564627   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564837   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564919   65557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:47.564973   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.565070   65557 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:47.565097   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.567831   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568013   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568257   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.568284   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568398   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.568422   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568432   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.568625   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.568630   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.568821   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.568824   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.568927   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.568980   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.569131   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.652798   65557 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:47.689415   65557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:47.842567   65557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:47.849511   65557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:47.849574   65557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:47.868424   65557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:47.868448   65557 start.go:494] detecting cgroup driver to use...
	I0314 00:58:47.868509   65557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:47.887449   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:47.902382   65557 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:47.902442   65557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:47.916938   65557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:47.932214   65557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:48.055437   65557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:48.233856   65557 docker.go:233] disabling docker service ...
	I0314 00:58:48.233932   65557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:48.250632   65557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:48.265181   65557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:48.397526   65557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:48.539003   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:48.555791   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:48.576760   65557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:58:48.576812   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.589305   65557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:48.589410   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.602952   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.614619   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.626026   65557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:48.637921   65557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:48.648336   65557 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:48.648397   65557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:48.663603   65557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
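The three commands above are the usual bridge-netfilter preparation: the sysctl probe fails with status 255 because the br_netfilter kernel module is not loaded yet, so minikube loads the module with modprobe and then enables IPv4 forwarding. A small Go sketch of the same preparation, given as an illustrative assumption rather than the actual crio.go code:

	// Illustrative only: load br_netfilter when the bridge-nf sysctl is not
	// present yet, then enable IPv4 forwarding, mirroring the shell commands
	// in the log above. Requires root, like the original commands.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// A missing sysctl file means the br_netfilter module is not loaded.
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			if out, mErr := exec.Command("modprobe", "br_netfilter").CombinedOutput(); mErr != nil {
				log.Fatalf("modprobe br_netfilter: %v: %s", mErr, out)
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			log.Fatalf("enable ip_forward: %v", err)
		}
	}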
	I0314 00:58:48.674731   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:48.804506   65557 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:58:48.949960   65557 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:48.950037   65557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:48.955185   65557 start.go:562] Will wait 60s for crictl version
	I0314 00:58:48.955248   65557 ssh_runner.go:195] Run: which crictl
	I0314 00:58:48.959205   65557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:48.998285   65557 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:48.998378   65557 ssh_runner.go:195] Run: crio --version
	I0314 00:58:49.028352   65557 ssh_runner.go:195] Run: crio --version
	I0314 00:58:49.061493   65557 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 00:58:49.062817   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:49.065664   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:49.066015   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:49.066042   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:49.066240   65557 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:49.071178   65557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:49.085832   65557 kubeadm.go:877] updating cluster {Name:embed-certs-164135 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:49.086050   65557 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:58:49.086127   65557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:49.127181   65557 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 00:58:49.127258   65557 ssh_runner.go:195] Run: which lz4
	I0314 00:58:49.131578   65557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 00:58:49.136474   65557 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:49.136504   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 00:58:46.038840   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:48.540509   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:48.579595   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.079898   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.580139   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:50.079945   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:50.579977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:51.079981   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:51.580391   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:52.080057   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:52.579968   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:53.080503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.336251   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:51.841160   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:50.939606   65557 crio.go:444] duration metric: took 1.808075483s to copy over tarball
	I0314 00:58:50.939682   65557 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:53.536072   65557 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.596358521s)
	I0314 00:58:53.536109   65557 crio.go:451] duration metric: took 2.596476827s to extract the tarball
	I0314 00:58:53.536119   65557 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:53.579265   65557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:53.626350   65557 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:58:53.626371   65557 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:58:53.626378   65557 kubeadm.go:928] updating node { 192.168.50.72 8443 v1.28.4 crio true true} ...
	I0314 00:58:53.626500   65557 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-164135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:53.626586   65557 ssh_runner.go:195] Run: crio config
	I0314 00:58:53.679923   65557 cni.go:84] Creating CNI manager for ""
	I0314 00:58:53.679946   65557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:53.679958   65557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:53.679976   65557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.72 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-164135 NodeName:embed-certs-164135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:58:53.680104   65557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-164135"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:53.680163   65557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:58:53.690891   65557 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:53.690972   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:53.701173   65557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0314 00:58:53.719020   65557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:53.737828   65557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
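The YAML dumped above is rendered in memory and copied to /var/tmp/minikube/kubeadm.yaml.new on the node, where the restart logic later compares it against the existing file. As a rough illustration only (this is not minikube's actual templating code; the template text and parameter names below are hypothetical), a fragment of such a config can be produced with Go's text/template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeadmTmpl is a hypothetical, heavily trimmed template in the spirit of the
    // InitConfiguration dumped above; it is NOT minikube's real template.
    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    type params struct {
    	AdvertiseAddress string
    	BindPort         int
    	CRISocket        string
    	NodeName         string
    }

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	// Values taken from the log above.
    	p := params{
    		AdvertiseAddress: "192.168.50.72",
    		BindPort:         8443,
    		CRISocket:        "unix:///var/run/crio/crio.sock",
    		NodeName:         "embed-certs-164135",
    	}
    	if err := t.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }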
	I0314 00:58:53.756425   65557 ssh_runner.go:195] Run: grep 192.168.50.72	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:53.760294   65557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:53.773705   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:53.892346   65557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:53.910603   65557 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135 for IP: 192.168.50.72
	I0314 00:58:53.910627   65557 certs.go:194] generating shared ca certs ...
	I0314 00:58:53.910647   65557 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:53.910827   65557 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:53.910871   65557 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:53.910880   65557 certs.go:256] generating profile certs ...
	I0314 00:58:53.910979   65557 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/client.key
	I0314 00:58:53.911031   65557 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.key.e2917335
	I0314 00:58:53.911064   65557 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.key
	I0314 00:58:53.911166   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:53.911192   65557 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:53.911239   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:53.911262   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:53.911282   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:53.911306   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:53.911340   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:53.911957   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:53.966930   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:54.004054   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:54.052130   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:54.079203   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0314 00:58:54.120151   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:58:54.148078   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:54.176982   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 00:58:54.205291   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:54.231890   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:54.258106   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:54.284561   65557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:54.303013   65557 ssh_runner.go:195] Run: openssl version
	I0314 00:58:54.309043   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:54.320237   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.325350   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.325394   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.331618   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:51.037616   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:53.039388   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:53.579463   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.080043   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.579984   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:55.080165   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:55.580029   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.079980   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.580014   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.080139   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.580122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:58.080405   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.335226   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:56.841123   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:54.343570   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:54.542451   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.547508   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.547561   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.553553   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:54.565071   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:54.577055   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.582453   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.582503   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.588916   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:54.601405   65557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:54.606092   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:54.612639   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:54.619071   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:54.625702   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:54.631739   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:54.637769   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
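Each of the openssl invocations above is the standard "-checkend 86400" test: exit non-zero if the certificate expires within the next 24 hours. A minimal Go sketch of the same check, using only the standard library (the certificate path is just one of the files probed above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// One of the certificates checked in the log above.
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of `openssl x509 -checkend 86400`: is the cert still valid 24h from now?
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }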
	I0314 00:58:54.644061   65557 kubeadm.go:391] StartCluster: {Name:embed-certs-164135 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:54.644158   65557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:54.644207   65557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:54.683466   65557 cri.go:89] found id: ""
	I0314 00:58:54.683537   65557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:54.695034   65557 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:54.695056   65557 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:54.695062   65557 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:54.695122   65557 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:54.706010   65557 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:54.707111   65557 kubeconfig.go:125] found "embed-certs-164135" server: "https://192.168.50.72:8443"
	I0314 00:58:54.709121   65557 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:54.722953   65557 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.72
	I0314 00:58:54.722994   65557 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:54.723009   65557 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:54.723100   65557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:54.787268   65557 cri.go:89] found id: ""
	I0314 00:58:54.787345   65557 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:54.816753   65557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:54.828303   65557 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:54.828333   65557 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:54.828385   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:54.841953   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:54.842070   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:54.854072   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:54.867993   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:54.868062   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:54.878707   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:54.888993   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:54.889070   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:54.899214   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:54.909228   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:54.909279   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:54.920066   65557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:54.931094   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.052967   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.727704   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.951743   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:56.038342   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:56.138332   65557 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:56.138421   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.639433   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.138622   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.167124   65557 api_server.go:72] duration metric: took 1.028792267s to wait for apiserver process to appear ...
	I0314 00:58:57.167147   65557 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:57.167168   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:58:57.167606   65557 api_server.go:269] stopped: https://192.168.50.72:8443/healthz: Get "https://192.168.50.72:8443/healthz": dial tcp 192.168.50.72:8443: connect: connection refused
	I0314 00:58:57.668020   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:58:55.579569   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:58.039695   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:00.039862   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:00.321979   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:59:00.322014   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:59:00.322033   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:00.354801   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:59:00.354829   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:59:00.668268   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:00.673345   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:59:00.673375   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:59:01.167291   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:01.172646   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:59:01.172674   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:59:01.667928   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:01.675916   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0314 00:59:01.684834   65557 api_server.go:141] control plane version: v1.28.4
	I0314 00:59:01.684866   65557 api_server.go:131] duration metric: took 4.517711081s to wait for apiserver health ...
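The healthz wait above tolerates the transient 403 and 500 responses the apiserver returns while its RBAC bootstrap roles and system priority classes are still being created, and stops once /healthz answers 200 with body "ok". A simplified sketch of that kind of poll, assuming anonymous access to /healthz and skipping TLS verification (illustrative only, not minikube's api_server.go; the endpoint and timeout are taken from the log for flavor):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.50.72:8443/healthz"
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// A quick probe can skip verification of the apiserver's serving cert.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			// 403/500 are expected while bootstrap post-start hooks are still running.
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				fmt.Println("apiserver is healthy")
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		} else {
    			fmt.Printf("healthz not reachable yet: %v\n", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }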
	I0314 00:59:01.684877   65557 cni.go:84] Creating CNI manager for ""
	I0314 00:59:01.684886   65557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:59:01.687151   65557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:58.580011   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:59.079610   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:59.579674   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:00.079861   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:00.579713   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.079977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.580027   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:02.079793   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:02.579549   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:03.080040   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.688950   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:59:01.730963   65557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:59:01.777163   65557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:59:01.788546   65557 system_pods.go:59] 8 kube-system pods found
	I0314 00:59:01.788590   65557 system_pods.go:61] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:59:01.788602   65557 system_pods.go:61] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:59:01.788614   65557 system_pods.go:61] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:59:01.788626   65557 system_pods.go:61] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:59:01.788641   65557 system_pods.go:61] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 00:59:01.788650   65557 system_pods.go:61] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:59:01.788662   65557 system_pods.go:61] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:59:01.788681   65557 system_pods.go:61] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 00:59:01.788692   65557 system_pods.go:74] duration metric: took 11.509392ms to wait for pod list to return data ...
	I0314 00:59:01.788701   65557 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:59:01.795122   65557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:59:01.795147   65557 node_conditions.go:123] node cpu capacity is 2
	I0314 00:59:01.795157   65557 node_conditions.go:105] duration metric: took 6.44942ms to run NodePressure ...
	I0314 00:59:01.795172   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:59:02.044317   65557 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:59:02.050019   65557 kubeadm.go:733] kubelet initialised
	I0314 00:59:02.050040   65557 kubeadm.go:734] duration metric: took 5.70331ms waiting for restarted kubelet to initialise ...
	I0314 00:59:02.050049   65557 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:02.056678   65557 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.061780   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.061803   65557 pod_ready.go:81] duration metric: took 5.104116ms for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.061811   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.061817   65557 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.067102   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "etcd-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.067123   65557 pod_ready.go:81] duration metric: took 5.298132ms for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.067134   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "etcd-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.067142   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.072079   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.072097   65557 pod_ready.go:81] duration metric: took 4.946567ms for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.072105   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.072110   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.181781   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.181814   65557 pod_ready.go:81] duration metric: took 109.687713ms for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.181827   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.181835   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.581700   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-proxy-wjz6d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.581726   65557 pod_ready.go:81] duration metric: took 399.880012ms for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.581734   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-proxy-wjz6d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.581741   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.981386   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.981415   65557 pod_ready.go:81] duration metric: took 399.66708ms for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.981428   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.981434   65557 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:03.381927   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:03.381964   65557 pod_ready.go:81] duration metric: took 400.519247ms for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:03.381976   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:03.381986   65557 pod_ready.go:38] duration metric: took 1.331926826s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
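The per-pod waits above repeatedly fetch each system-critical pod and bail out early while the hosting node still reports Ready=False. A condensed sketch of the underlying Ready-condition check with client-go (the kubeconfig path, pod name and timeout below are illustrative, not minikube's pod_ready.go):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the named kube-system pod has the Ready condition set to True.
    func podIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		ready, err := podIsReady(cs, "etcd-embed-certs-164135")
    		if err == nil && ready {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }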
	I0314 00:59:03.382007   65557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:59:03.397550   65557 ops.go:34] apiserver oom_adj: -16
	I0314 00:59:03.397571   65557 kubeadm.go:591] duration metric: took 8.702501848s to restartPrimaryControlPlane
	I0314 00:59:03.397583   65557 kubeadm.go:393] duration metric: took 8.753529728s to StartCluster
	I0314 00:59:03.397601   65557 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:59:03.397687   65557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:59:03.399793   65557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:59:03.400058   65557 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:59:03.402113   65557 out.go:177] * Verifying Kubernetes components...
	I0314 00:59:03.400139   65557 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:59:03.400293   65557 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:59:03.403722   65557 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-164135"
	I0314 00:59:03.403746   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:59:03.403773   65557 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-164135"
	W0314 00:59:03.403788   65557 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:59:03.403822   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.403725   65557 addons.go:69] Setting metrics-server=true in profile "embed-certs-164135"
	I0314 00:59:03.403888   65557 addons.go:234] Setting addon metrics-server=true in "embed-certs-164135"
	W0314 00:59:03.403922   65557 addons.go:243] addon metrics-server should already be in state true
	I0314 00:59:03.403727   65557 addons.go:69] Setting default-storageclass=true in profile "embed-certs-164135"
	I0314 00:59:03.403960   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.403978   65557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-164135"
	I0314 00:59:03.404257   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404295   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.404316   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404332   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.404355   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404387   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.420268   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0314 00:59:03.420835   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.421449   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.421474   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.421817   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.421860   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0314 00:59:03.422393   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.422414   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.422447   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.422893   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.422917   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.423232   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.423387   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.423804   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44335
	I0314 00:59:03.424136   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.424718   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.424737   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.426912   65557 addons.go:234] Setting addon default-storageclass=true in "embed-certs-164135"
	W0314 00:59:03.426935   65557 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:59:03.426962   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.427356   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.427387   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.427586   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.428046   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.428077   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.440982   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40425
	I0314 00:59:03.441492   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.442055   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.442077   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.442569   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I0314 00:59:03.442608   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.442838   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.443084   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.443708   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.443729   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.444112   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.444150   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33079
	I0314 00:59:03.444307   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.444598   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.444915   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.445374   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.445408   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.448170   65557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:59:03.445928   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.445963   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.449754   65557 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:59:03.448952   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.449778   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:59:03.451092   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.451092   65557 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:59.336088   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:01.338156   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:03.452582   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:59:03.451157   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.452695   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:59:03.452720   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.454750   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.455252   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.455282   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.455410   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.455600   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.455777   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.455944   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.455989   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.456439   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.456477   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.456710   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.456869   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.457034   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.457226   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.469815   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35435
	I0314 00:59:03.470353   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.470873   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.470895   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.471166   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.471370   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.472977   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.473244   65557 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:59:03.473258   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:59:03.473271   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.476223   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.476682   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.476709   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.476857   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.477040   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.477171   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.477302   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.616718   65557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:59:03.634198   65557 node_ready.go:35] waiting up to 6m0s for node "embed-certs-164135" to be "Ready" ...
	I0314 00:59:03.716113   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:59:03.749507   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:59:03.749536   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:59:03.755619   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:59:03.790208   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:59:03.790231   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:59:03.846087   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:59:03.846118   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:59:03.892534   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:59:04.977315   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.221655296s)
	I0314 00:59:04.977372   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977386   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977433   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.261285831s)
	I0314 00:59:04.977471   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977481   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977698   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.977722   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.977731   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977738   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977783   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.977705   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.978016   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.978033   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.978067   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.978803   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.978822   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.978842   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.978883   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.980542   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.980629   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.980683   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.985502   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.985521   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.985822   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.985854   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.985862   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.071684   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.179091576s)
	I0314 00:59:05.071736   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:05.071751   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:05.072016   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:05.072040   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.072050   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:05.072057   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:05.072248   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:05.072260   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.072271   65557 addons.go:470] Verifying addon metrics-server=true in "embed-certs-164135"
	I0314 00:59:05.074420   65557 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:59:02.537641   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:04.539777   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:03.580280   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:04.079957   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:04.580070   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:05.079965   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:05.580193   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:06.079657   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:06.580026   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:07.080460   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:07.579573   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:08.079458   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:03.836267   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:05.837427   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:07.838129   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:05.075856   65557 addons.go:505] duration metric: took 1.675722032s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 00:59:05.639116   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:08.138282   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:07.039088   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:09.538790   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:08.579872   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.080006   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.579949   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:10.079511   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:10.579616   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:11.080003   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:11.580335   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:12.079830   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:12.579519   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:13.080004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.839624   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:12.335977   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:10.138471   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:11.138534   65557 node_ready.go:49] node "embed-certs-164135" has status "Ready":"True"
	I0314 00:59:11.138572   65557 node_ready.go:38] duration metric: took 7.504341185s for node "embed-certs-164135" to be "Ready" ...
	I0314 00:59:11.138593   65557 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:11.145002   65557 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:11.150712   65557 pod_ready.go:92] pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:11.150735   65557 pod_ready.go:81] duration metric: took 5.69376ms for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:11.150743   65557 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:13.157122   65557 pod_ready.go:102] pod "etcd-embed-certs-164135" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:11.539006   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:14.038372   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:13.580021   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.079972   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.580562   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:15.079973   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:15.580183   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:16.080442   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:16.580265   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:17.079726   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:17.580004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:18.080000   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.336576   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:16.836200   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:15.158112   65557 pod_ready.go:92] pod "etcd-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.158134   65557 pod_ready.go:81] duration metric: took 4.0073854s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.158143   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.164046   65557 pod_ready.go:92] pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.164066   65557 pod_ready.go:81] duration metric: took 5.916933ms for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.164075   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.172381   65557 pod_ready.go:92] pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.172400   65557 pod_ready.go:81] duration metric: took 8.319741ms for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.172408   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.178027   65557 pod_ready.go:92] pod "kube-proxy-wjz6d" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.178047   65557 pod_ready.go:81] duration metric: took 5.632365ms for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.178066   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.185425   65557 pod_ready.go:92] pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.185445   65557 pod_ready.go:81] duration metric: took 7.370111ms for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.185455   65557 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:17.191963   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:19.198718   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:16.537469   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:18.537882   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:18.580382   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.079467   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.579813   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:20.080492   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:20.580051   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:21.079982   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:21.579462   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:22.079943   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:22.579753   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:23.079986   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.336004   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:21.835829   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:21.694213   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:24.192099   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:20.538270   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:23.038355   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:23.579609   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:24.080429   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:24.579806   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:25.079568   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:25.580411   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:26.079986   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:26.580297   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:27.079547   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:27.579543   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:28.080116   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:23.837356   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:25.844148   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.336761   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:26.193550   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.693261   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:25.537801   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.038015   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.580503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:29.079562   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:29.579984   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.079977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.579657   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:31.080002   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:31.580430   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:32.079709   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:32.579764   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:33.079717   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.835476   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.335371   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:31.192779   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.194092   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:30.537951   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:32.538810   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:35.038186   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.579468   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:34.079959   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:34.579891   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:35.079953   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:35.579666   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:36.080471   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:36.580528   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:36.580620   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:36.628794   66232 cri.go:89] found id: ""
	I0314 00:59:36.628825   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.628836   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:36.628844   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:36.628903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:36.665474   66232 cri.go:89] found id: ""
	I0314 00:59:36.665504   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.665514   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:36.665521   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:36.665612   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:36.703404   66232 cri.go:89] found id: ""
	I0314 00:59:36.703436   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.703443   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:36.703449   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:36.703515   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:36.739602   66232 cri.go:89] found id: ""
	I0314 00:59:36.739629   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.739636   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:36.739642   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:36.739698   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:36.777836   66232 cri.go:89] found id: ""
	I0314 00:59:36.777862   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.777869   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:36.777875   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:36.777921   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:36.817211   66232 cri.go:89] found id: ""
	I0314 00:59:36.817254   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.817264   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:36.817271   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:36.817320   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:36.855890   66232 cri.go:89] found id: ""
	I0314 00:59:36.855924   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.855943   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:36.855951   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:36.856007   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:36.894333   66232 cri.go:89] found id: ""
	I0314 00:59:36.894360   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.894371   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:36.894391   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:36.894406   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:36.909757   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:36.909796   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:37.039754   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:37.039774   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:37.039785   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:37.100601   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:37.100635   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:37.143950   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:37.143976   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:35.837374   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:38.335068   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:35.692269   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:37.692333   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:37.538270   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:40.039124   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:39.696850   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:39.720410   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:39.720480   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:39.759574   66232 cri.go:89] found id: ""
	I0314 00:59:39.759624   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.759635   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:39.759643   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:39.759719   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:39.802990   66232 cri.go:89] found id: ""
	I0314 00:59:39.803013   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.803021   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:39.803026   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:39.803090   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:39.850691   66232 cri.go:89] found id: ""
	I0314 00:59:39.850718   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.850729   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:39.850736   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:39.850831   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:39.890748   66232 cri.go:89] found id: ""
	I0314 00:59:39.890796   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.890806   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:39.890813   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:39.890871   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:39.929333   66232 cri.go:89] found id: ""
	I0314 00:59:39.929361   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.929368   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:39.929374   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:39.929428   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:39.969207   66232 cri.go:89] found id: ""
	I0314 00:59:39.969241   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.969248   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:39.969254   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:39.969328   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:40.006207   66232 cri.go:89] found id: ""
	I0314 00:59:40.006241   66232 logs.go:276] 0 containers: []
	W0314 00:59:40.006252   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:40.006260   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:40.006343   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:40.047357   66232 cri.go:89] found id: ""
	I0314 00:59:40.047384   66232 logs.go:276] 0 containers: []
	W0314 00:59:40.047391   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:40.047400   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:40.047418   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:40.095431   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:40.095461   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:40.151675   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:40.151710   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:40.169388   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:40.169426   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:40.252915   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:40.252941   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:40.252958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:42.828437   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:42.842753   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:42.842838   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:42.881157   66232 cri.go:89] found id: ""
	I0314 00:59:42.881189   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.881200   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:42.881207   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:42.881267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:42.921364   66232 cri.go:89] found id: ""
	I0314 00:59:42.921393   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.921405   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:42.921412   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:42.921477   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:42.956622   66232 cri.go:89] found id: ""
	I0314 00:59:42.956647   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.956655   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:42.956660   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:42.956705   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:42.994476   66232 cri.go:89] found id: ""
	I0314 00:59:42.994502   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.994514   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:42.994521   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:42.994580   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:43.032061   66232 cri.go:89] found id: ""
	I0314 00:59:43.032089   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.032099   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:43.032106   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:43.032177   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:43.073398   66232 cri.go:89] found id: ""
	I0314 00:59:43.073427   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.073444   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:43.073452   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:43.073527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:40.336003   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.336136   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:40.192758   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.193411   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.538036   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:45.038933   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:43.111407   66232 cri.go:89] found id: ""
	I0314 00:59:43.111891   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.111902   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:43.111909   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:43.111988   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:43.154347   66232 cri.go:89] found id: ""
	I0314 00:59:43.154374   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.154384   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:43.154393   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:43.154422   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:43.202605   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:43.202636   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:43.257108   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:43.257143   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:43.273252   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:43.273282   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:43.347646   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:43.347671   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:43.347687   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:45.920045   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:45.934299   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:45.934379   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:45.973556   66232 cri.go:89] found id: ""
	I0314 00:59:45.973588   66232 logs.go:276] 0 containers: []
	W0314 00:59:45.973599   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:45.973607   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:45.973668   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:46.012623   66232 cri.go:89] found id: ""
	I0314 00:59:46.012653   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.012660   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:46.012667   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:46.012720   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:46.052290   66232 cri.go:89] found id: ""
	I0314 00:59:46.052318   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.052328   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:46.052336   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:46.052401   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:46.089098   66232 cri.go:89] found id: ""
	I0314 00:59:46.089129   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.089139   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:46.089147   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:46.089207   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:46.149733   66232 cri.go:89] found id: ""
	I0314 00:59:46.149768   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.149778   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:46.149787   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:46.149856   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:46.210517   66232 cri.go:89] found id: ""
	I0314 00:59:46.210548   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.210555   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:46.210563   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:46.210631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:46.275257   66232 cri.go:89] found id: ""
	I0314 00:59:46.275288   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.275299   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:46.275307   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:46.275373   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:46.319784   66232 cri.go:89] found id: ""
	I0314 00:59:46.319808   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.319819   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:46.319829   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:46.319843   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:46.366285   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:46.366319   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:46.423978   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:46.424015   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:46.438508   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:46.438535   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:46.509518   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:46.509538   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:46.509552   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:44.337116   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:46.341237   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:44.698272   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:47.192460   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.193298   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:47.537766   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.541370   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.089210   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:49.105225   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:49.105298   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:49.146293   66232 cri.go:89] found id: ""
	I0314 00:59:49.146319   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.146326   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:49.146331   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:49.146377   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:49.190814   66232 cri.go:89] found id: ""
	I0314 00:59:49.190838   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.190847   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:49.190854   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:49.190910   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:49.230181   66232 cri.go:89] found id: ""
	I0314 00:59:49.230206   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.230214   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:49.230219   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:49.230267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:49.268437   66232 cri.go:89] found id: ""
	I0314 00:59:49.268468   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.268479   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:49.268486   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:49.268547   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:49.306838   66232 cri.go:89] found id: ""
	I0314 00:59:49.306869   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.306877   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:49.306883   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:49.306944   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:49.348907   66232 cri.go:89] found id: ""
	I0314 00:59:49.348937   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.348948   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:49.348956   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:49.349014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:49.391993   66232 cri.go:89] found id: ""
	I0314 00:59:49.392017   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.392025   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:49.392030   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:49.392133   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:49.433957   66232 cri.go:89] found id: ""
	I0314 00:59:49.433988   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.434000   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:49.434011   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:49.434026   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:49.490808   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:49.490846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:49.506203   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:49.506231   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:49.596998   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:49.597017   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:49.597034   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:49.683358   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:49.683396   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:52.230217   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:52.243787   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:52.243845   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:52.284399   66232 cri.go:89] found id: ""
	I0314 00:59:52.284424   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.284434   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:52.284441   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:52.284486   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:52.319413   66232 cri.go:89] found id: ""
	I0314 00:59:52.319439   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.319450   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:52.319457   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:52.319517   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:52.355774   66232 cri.go:89] found id: ""
	I0314 00:59:52.355804   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.355812   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:52.355818   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:52.355873   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:52.393420   66232 cri.go:89] found id: ""
	I0314 00:59:52.393445   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.393453   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:52.393459   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:52.393562   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:52.435598   66232 cri.go:89] found id: ""
	I0314 00:59:52.435627   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.435637   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:52.435646   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:52.435700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:52.478202   66232 cri.go:89] found id: ""
	I0314 00:59:52.478230   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.478241   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:52.478250   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:52.478300   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:52.515135   66232 cri.go:89] found id: ""
	I0314 00:59:52.515165   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.515176   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:52.515185   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:52.515251   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:52.553094   66232 cri.go:89] found id: ""
	I0314 00:59:52.553126   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.553143   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:52.553150   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:52.553174   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:52.568538   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:52.568565   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:52.643136   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:52.643164   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:52.643180   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:52.729674   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:52.729708   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:52.778312   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:52.778343   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:48.837200   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:51.336514   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:53.338910   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:51.693709   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:53.694241   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:52.037993   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:54.038771   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:55.333953   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:55.348232   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:55.348292   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:55.386488   66232 cri.go:89] found id: ""
	I0314 00:59:55.386517   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.386526   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:55.386534   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:55.386597   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:55.428706   66232 cri.go:89] found id: ""
	I0314 00:59:55.428737   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.428748   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:55.428755   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:55.428820   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:55.465448   66232 cri.go:89] found id: ""
	I0314 00:59:55.465478   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.465489   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:55.465495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:55.465558   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:55.503442   66232 cri.go:89] found id: ""
	I0314 00:59:55.503469   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.503479   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:55.503487   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:55.503582   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:55.542098   66232 cri.go:89] found id: ""
	I0314 00:59:55.542127   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.542137   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:55.542145   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:55.542209   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:55.580298   66232 cri.go:89] found id: ""
	I0314 00:59:55.580321   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.580329   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:55.580335   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:55.580405   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:55.625460   66232 cri.go:89] found id: ""
	I0314 00:59:55.625482   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.625489   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:55.625495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:55.625544   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:55.663273   66232 cri.go:89] found id: ""
	I0314 00:59:55.663301   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.663316   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:55.663327   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:55.663373   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:55.680020   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:55.680047   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:55.764504   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:55.764523   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:55.764537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:55.842804   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:55.842837   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:55.889505   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:55.889540   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:55.836332   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.335436   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:56.193387   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.692808   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:56.045666   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.538405   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.445178   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:58.459321   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:58.459397   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:58.498338   66232 cri.go:89] found id: ""
	I0314 00:59:58.498362   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.498369   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:58.498374   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:58.498422   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:58.536406   66232 cri.go:89] found id: ""
	I0314 00:59:58.536434   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.536444   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:58.536451   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:58.536509   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:58.574902   66232 cri.go:89] found id: ""
	I0314 00:59:58.574930   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.574937   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:58.574943   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:58.574988   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:58.613132   66232 cri.go:89] found id: ""
	I0314 00:59:58.613154   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.613162   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:58.613167   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:58.613211   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:58.651052   66232 cri.go:89] found id: ""
	I0314 00:59:58.651076   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.651085   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:58.651104   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:58.651170   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:58.686347   66232 cri.go:89] found id: ""
	I0314 00:59:58.686375   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.686385   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:58.686393   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:58.686443   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:58.725992   66232 cri.go:89] found id: ""
	I0314 00:59:58.726021   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.726030   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:58.726037   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:58.726113   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:58.764130   66232 cri.go:89] found id: ""
	I0314 00:59:58.764153   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.764161   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:58.764169   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:58.764181   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:58.816153   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:58.816195   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:58.831675   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:58.831703   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:58.912867   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:58.912890   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:58.912902   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:59.000502   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:59.000537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:01.544701   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:01.561114   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:01.561192   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:01.603886   66232 cri.go:89] found id: ""
	I0314 01:00:01.603916   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.603924   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:01.603929   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:01.603989   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:01.645142   66232 cri.go:89] found id: ""
	I0314 01:00:01.645174   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.645189   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:01.645196   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:01.645248   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:01.686281   66232 cri.go:89] found id: ""
	I0314 01:00:01.686317   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.686326   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:01.686332   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:01.686389   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:01.729909   66232 cri.go:89] found id: ""
	I0314 01:00:01.729945   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.729955   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:01.729963   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:01.730029   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:01.773709   66232 cri.go:89] found id: ""
	I0314 01:00:01.773746   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.773754   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:01.773770   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:01.773833   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:01.813535   66232 cri.go:89] found id: ""
	I0314 01:00:01.813560   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.813568   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:01.813573   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:01.813632   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:01.855452   66232 cri.go:89] found id: ""
	I0314 01:00:01.855482   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.855493   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:01.855499   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:01.855561   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:01.892261   66232 cri.go:89] found id: ""
	I0314 01:00:01.892287   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.892297   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:01.892308   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:01.892322   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:01.945227   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:01.945258   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:01.961280   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:01.961307   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:02.039204   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:02.039227   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:02.039241   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:02.116966   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:02.117002   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:00.840447   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:03.335752   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:00.693223   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:02.694565   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:00.538670   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:02.539348   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:05.037780   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:04.659869   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:04.673750   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:04.673818   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:04.713767   66232 cri.go:89] found id: ""
	I0314 01:00:04.713802   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.713813   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:04.713820   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:04.713882   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:04.750205   66232 cri.go:89] found id: ""
	I0314 01:00:04.750240   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.750252   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:04.750259   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:04.750323   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:04.789742   66232 cri.go:89] found id: ""
	I0314 01:00:04.789770   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.789778   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:04.789784   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:04.789832   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:04.826033   66232 cri.go:89] found id: ""
	I0314 01:00:04.826071   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.826091   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:04.826099   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:04.826161   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:04.865283   66232 cri.go:89] found id: ""
	I0314 01:00:04.865320   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.865330   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:04.865339   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:04.865387   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:04.906716   66232 cri.go:89] found id: ""
	I0314 01:00:04.906745   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.906756   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:04.906774   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:04.906835   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:04.943834   66232 cri.go:89] found id: ""
	I0314 01:00:04.943867   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.943879   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:04.943887   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:04.943953   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:04.986408   66232 cri.go:89] found id: ""
	I0314 01:00:04.986435   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.986445   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:04.986456   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:04.986472   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:05.040543   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:05.040583   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:05.055657   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:05.055685   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:05.133883   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:05.133907   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:05.133921   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:05.213133   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:05.213170   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:07.754533   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:07.768008   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:07.768084   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:07.807785   66232 cri.go:89] found id: ""
	I0314 01:00:07.807814   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.807823   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:07.807830   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:07.807889   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:07.847500   66232 cri.go:89] found id: ""
	I0314 01:00:07.847529   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.847539   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:07.847547   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:07.847609   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:07.886507   66232 cri.go:89] found id: ""
	I0314 01:00:07.886534   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.886557   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:07.886563   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:07.886619   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:07.923881   66232 cri.go:89] found id: ""
	I0314 01:00:07.923908   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.923918   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:07.923925   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:07.923985   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:07.959149   66232 cri.go:89] found id: ""
	I0314 01:00:07.959179   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.959190   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:07.959198   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:07.959257   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:07.995821   66232 cri.go:89] found id: ""
	I0314 01:00:07.995849   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.995861   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:07.995869   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:07.995926   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:08.033530   66232 cri.go:89] found id: ""
	I0314 01:00:08.033554   66232 logs.go:276] 0 containers: []
	W0314 01:00:08.033561   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:08.033567   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:08.033613   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:08.069304   66232 cri.go:89] found id: ""
	I0314 01:00:08.069332   66232 logs.go:276] 0 containers: []
	W0314 01:00:08.069341   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:08.069352   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:08.069366   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:05.838145   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:08.336193   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:05.192544   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:07.193040   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:09.195569   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:07.040795   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:09.538606   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:08.122695   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:08.122727   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:08.138439   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:08.138466   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:08.220553   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:08.220574   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:08.220586   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:08.301108   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:08.301143   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:10.858540   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:10.872473   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:10.872527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:10.911114   66232 cri.go:89] found id: ""
	I0314 01:00:10.911143   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.911154   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:10.911161   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:10.911218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:10.951647   66232 cri.go:89] found id: ""
	I0314 01:00:10.951678   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.951690   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:10.951697   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:10.951764   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:10.989244   66232 cri.go:89] found id: ""
	I0314 01:00:10.989272   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.989283   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:10.989291   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:10.989368   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:11.029977   66232 cri.go:89] found id: ""
	I0314 01:00:11.030004   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.030011   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:11.030017   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:11.030079   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:11.067444   66232 cri.go:89] found id: ""
	I0314 01:00:11.067467   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.067474   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:11.067480   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:11.067527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:11.104202   66232 cri.go:89] found id: ""
	I0314 01:00:11.104225   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.104233   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:11.104242   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:11.104302   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:11.143323   66232 cri.go:89] found id: ""
	I0314 01:00:11.143348   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.143376   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:11.143384   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:11.143438   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:11.182568   66232 cri.go:89] found id: ""
	I0314 01:00:11.182598   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.182608   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:11.182619   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:11.182640   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:11.199532   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:11.199572   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:11.276697   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:11.276722   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:11.276737   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:11.362086   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:11.362121   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:11.407686   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:11.407721   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:10.338610   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:12.835743   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:11.201752   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:13.692443   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:12.038010   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:14.038915   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:13.965971   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:13.981052   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:13.981124   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:14.021047   66232 cri.go:89] found id: ""
	I0314 01:00:14.021073   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.021085   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:14.021092   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:14.021150   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:14.066605   66232 cri.go:89] found id: ""
	I0314 01:00:14.066632   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.066638   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:14.066644   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:14.066689   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:14.105253   66232 cri.go:89] found id: ""
	I0314 01:00:14.105281   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.105290   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:14.105299   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:14.105407   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:14.141084   66232 cri.go:89] found id: ""
	I0314 01:00:14.141116   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.141126   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:14.141133   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:14.141194   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:14.177883   66232 cri.go:89] found id: ""
	I0314 01:00:14.177914   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.177924   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:14.177944   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:14.178010   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:14.217102   66232 cri.go:89] found id: ""
	I0314 01:00:14.217133   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.217144   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:14.217162   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:14.217218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:14.256624   66232 cri.go:89] found id: ""
	I0314 01:00:14.256652   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.256662   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:14.256669   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:14.256731   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:14.295330   66232 cri.go:89] found id: ""
	I0314 01:00:14.295358   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.295368   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:14.295378   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:14.295395   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:14.351898   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:14.351947   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:14.368360   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:14.368399   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:14.447629   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:14.447651   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:14.447678   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:14.536275   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:14.536307   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:17.079641   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:17.093657   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:17.093730   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:17.131290   66232 cri.go:89] found id: ""
	I0314 01:00:17.131318   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.131327   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:17.131333   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:17.131379   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:17.169832   66232 cri.go:89] found id: ""
	I0314 01:00:17.169864   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.169874   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:17.169882   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:17.169942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:17.206961   66232 cri.go:89] found id: ""
	I0314 01:00:17.206982   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.206989   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:17.206994   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:17.207047   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:17.245675   66232 cri.go:89] found id: ""
	I0314 01:00:17.245703   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.245714   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:17.245721   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:17.245776   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:17.287768   66232 cri.go:89] found id: ""
	I0314 01:00:17.287797   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.287808   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:17.287815   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:17.287881   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:17.322555   66232 cri.go:89] found id: ""
	I0314 01:00:17.322590   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.322600   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:17.322608   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:17.322669   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:17.361149   66232 cri.go:89] found id: ""
	I0314 01:00:17.361176   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.361190   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:17.361197   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:17.361255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:17.397191   66232 cri.go:89] found id: ""
	I0314 01:00:17.397218   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.397227   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:17.397236   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:17.397248   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:17.412959   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:17.412988   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:17.493344   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:17.493364   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:17.493375   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:17.573531   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:17.573564   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:17.616326   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:17.616369   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:14.837070   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:17.335625   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:15.693453   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:18.192702   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:16.537571   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:18.537742   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.171238   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:20.186834   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:20.186890   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:20.226834   66232 cri.go:89] found id: ""
	I0314 01:00:20.226856   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.226863   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:20.226868   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:20.226916   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:20.263003   66232 cri.go:89] found id: ""
	I0314 01:00:20.263032   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.263043   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:20.263052   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:20.263135   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:20.306354   66232 cri.go:89] found id: ""
	I0314 01:00:20.306378   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.306388   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:20.306397   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:20.306458   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:20.342460   66232 cri.go:89] found id: ""
	I0314 01:00:20.342491   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.342501   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:20.342509   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:20.342572   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:20.383367   66232 cri.go:89] found id: ""
	I0314 01:00:20.383395   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.383406   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:20.383414   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:20.383474   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:20.423190   66232 cri.go:89] found id: ""
	I0314 01:00:20.423220   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.423231   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:20.423240   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:20.423296   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:20.473454   66232 cri.go:89] found id: ""
	I0314 01:00:20.473501   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.473510   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:20.473518   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:20.473577   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:20.517922   66232 cri.go:89] found id: ""
	I0314 01:00:20.517954   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.517964   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:20.517976   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:20.517992   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:20.572023   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:20.572059   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:20.589573   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:20.589601   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:20.670843   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:20.670866   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:20.670881   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:20.753165   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:20.753201   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:19.336013   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:21.338995   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.194020   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:22.194237   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.539631   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:22.539868   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:25.037490   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:23.299823   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:23.313303   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:23.313398   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:23.352500   66232 cri.go:89] found id: ""
	I0314 01:00:23.352531   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.352542   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:23.352550   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:23.352610   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:23.391967   66232 cri.go:89] found id: ""
	I0314 01:00:23.391997   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.392005   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:23.392013   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:23.392078   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:23.433269   66232 cri.go:89] found id: ""
	I0314 01:00:23.433303   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.433314   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:23.433324   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:23.433388   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:23.471251   66232 cri.go:89] found id: ""
	I0314 01:00:23.471278   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.471290   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:23.471297   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:23.471359   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:23.507920   66232 cri.go:89] found id: ""
	I0314 01:00:23.507952   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.507960   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:23.507966   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:23.508023   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:23.550432   66232 cri.go:89] found id: ""
	I0314 01:00:23.550464   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.550474   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:23.550483   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:23.550570   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:23.589750   66232 cri.go:89] found id: ""
	I0314 01:00:23.589773   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.589781   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:23.589789   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:23.589853   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:23.626135   66232 cri.go:89] found id: ""
	I0314 01:00:23.626171   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.626191   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:23.626202   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:23.626217   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:23.681729   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:23.681763   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:23.698219   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:23.698246   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:23.773285   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:23.773309   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:23.773321   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:23.856417   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:23.856449   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:26.399787   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:26.414459   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:26.414525   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:26.452117   66232 cri.go:89] found id: ""
	I0314 01:00:26.452142   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.452153   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:26.452162   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:26.452223   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:26.488892   66232 cri.go:89] found id: ""
	I0314 01:00:26.488918   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.488925   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:26.488931   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:26.488980   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:26.530194   66232 cri.go:89] found id: ""
	I0314 01:00:26.530224   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.530234   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:26.530242   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:26.530307   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:26.571356   66232 cri.go:89] found id: ""
	I0314 01:00:26.571382   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.571394   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:26.571402   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:26.571469   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:26.611465   66232 cri.go:89] found id: ""
	I0314 01:00:26.611492   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.611500   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:26.611522   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:26.611572   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:26.649783   66232 cri.go:89] found id: ""
	I0314 01:00:26.649811   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.649821   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:26.649830   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:26.649894   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:26.687519   66232 cri.go:89] found id: ""
	I0314 01:00:26.687546   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.687556   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:26.687569   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:26.687631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:26.726277   66232 cri.go:89] found id: ""
	I0314 01:00:26.726311   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.726322   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:26.726333   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:26.726349   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:26.743133   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:26.743162   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:26.824026   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:26.824046   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:26.824062   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:26.907032   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:26.907065   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:26.977583   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:26.977609   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:23.837152   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:26.335576   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:24.694276   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:27.192662   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.193302   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:27.037952   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.038545   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
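The interleaved pod_ready.go lines come from parallel test processes (65864, 65557, 66021) polling the Ready condition of their metrics-server pods. A hedged sketch of an equivalent poll, shelling out to kubectl with a JSONPath query (pod name and namespace taken from the log; a working kubeconfig is assumed), looks like this:

	// podReady reports whether the pod's Ready condition is "True".
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func podReady(namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		for {
			ready, err := podReady("kube-system", "metrics-server-57f55c9bc5-7pzll")
			if err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			// Matches the repeated `has status "Ready":"False"` lines above.
			fmt.Println("pod not Ready yet")
			time.Sleep(2 * time.Second)
		}
	}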
	I0314 01:00:29.530758   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:29.546984   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:29.547050   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:29.589191   66232 cri.go:89] found id: ""
	I0314 01:00:29.589214   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.589222   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:29.589231   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:29.589294   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:29.630380   66232 cri.go:89] found id: ""
	I0314 01:00:29.630407   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.630419   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:29.630426   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:29.630488   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:29.667407   66232 cri.go:89] found id: ""
	I0314 01:00:29.667443   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.667455   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:29.667463   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:29.667524   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:29.705745   66232 cri.go:89] found id: ""
	I0314 01:00:29.705776   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.705784   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:29.705790   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:29.705851   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:29.745280   66232 cri.go:89] found id: ""
	I0314 01:00:29.745314   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.745324   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:29.745335   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:29.745390   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:29.782900   66232 cri.go:89] found id: ""
	I0314 01:00:29.782935   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.782945   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:29.782954   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:29.783014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:29.825324   66232 cri.go:89] found id: ""
	I0314 01:00:29.825352   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.825363   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:29.825371   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:29.825436   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:29.869433   66232 cri.go:89] found id: ""
	I0314 01:00:29.869466   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.869476   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:29.869487   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:29.869502   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:29.912468   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:29.912494   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:29.965515   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:29.965555   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:29.982343   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:29.982367   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:30.057772   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:30.057797   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:30.057814   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:32.644707   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:32.667874   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:32.667950   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:32.727931   66232 cri.go:89] found id: ""
	I0314 01:00:32.727960   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.727971   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:32.727979   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:32.728038   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:32.766885   66232 cri.go:89] found id: ""
	I0314 01:00:32.766911   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.766921   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:32.766929   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:32.766989   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:32.804099   66232 cri.go:89] found id: ""
	I0314 01:00:32.804128   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.804137   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:32.804143   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:32.804200   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:32.845468   66232 cri.go:89] found id: ""
	I0314 01:00:32.845498   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.845507   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:32.845516   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:32.845607   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:32.884350   66232 cri.go:89] found id: ""
	I0314 01:00:32.884372   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.884380   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:32.884386   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:32.884437   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:32.920634   66232 cri.go:89] found id: ""
	I0314 01:00:32.920676   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.920692   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:32.920700   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:32.920756   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:32.959586   66232 cri.go:89] found id: ""
	I0314 01:00:32.959616   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.959627   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:32.959634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:32.959699   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:32.998814   66232 cri.go:89] found id: ""
	I0314 01:00:32.998854   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.998865   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:32.998882   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:32.998895   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:33.054782   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:33.054813   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:33.069772   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:33.069807   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:00:28.836740   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.335908   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:33.336613   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.692393   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:33.695343   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.539723   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:34.038889   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	W0314 01:00:33.153893   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:33.153913   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:33.153925   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:33.234165   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:33.234197   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:35.781872   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:35.797220   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:35.797300   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:35.836749   66232 cri.go:89] found id: ""
	I0314 01:00:35.836773   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.836779   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:35.836785   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:35.836841   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:35.875754   66232 cri.go:89] found id: ""
	I0314 01:00:35.875782   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.875790   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:35.875797   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:35.875844   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:35.914337   66232 cri.go:89] found id: ""
	I0314 01:00:35.914360   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.914368   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:35.914373   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:35.914428   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:35.954287   66232 cri.go:89] found id: ""
	I0314 01:00:35.954306   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.954313   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:35.954318   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:35.954365   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:35.995361   66232 cri.go:89] found id: ""
	I0314 01:00:35.995385   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.995393   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:35.995398   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:35.995455   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:36.040462   66232 cri.go:89] found id: ""
	I0314 01:00:36.040488   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.040497   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:36.040503   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:36.040567   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:36.078740   66232 cri.go:89] found id: ""
	I0314 01:00:36.078786   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.078797   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:36.078814   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:36.078885   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:36.120165   66232 cri.go:89] found id: ""
	I0314 01:00:36.120193   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.120203   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:36.120213   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:36.120239   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:36.136275   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:36.136312   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:36.217907   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:36.217929   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:36.217944   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:36.295177   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:36.295212   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:36.342587   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:36.342623   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:35.336966   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:37.337764   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:36.193887   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.693150   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:36.538529   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.538996   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.900832   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:38.914693   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:38.914782   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:38.954297   66232 cri.go:89] found id: ""
	I0314 01:00:38.954333   66232 logs.go:276] 0 containers: []
	W0314 01:00:38.954347   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:38.954354   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:38.954414   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:38.992427   66232 cri.go:89] found id: ""
	I0314 01:00:38.992458   66232 logs.go:276] 0 containers: []
	W0314 01:00:38.992468   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:38.992474   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:38.992521   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:39.028595   66232 cri.go:89] found id: ""
	I0314 01:00:39.028629   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.028640   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:39.028647   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:39.028707   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:39.064418   66232 cri.go:89] found id: ""
	I0314 01:00:39.064443   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.064450   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:39.064456   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:39.064503   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:39.101007   66232 cri.go:89] found id: ""
	I0314 01:00:39.101050   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.101060   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:39.101066   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:39.101125   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:39.142913   66232 cri.go:89] found id: ""
	I0314 01:00:39.142940   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.142950   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:39.142957   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:39.143018   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:39.179957   66232 cri.go:89] found id: ""
	I0314 01:00:39.179986   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.179997   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:39.180007   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:39.180068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:39.219688   66232 cri.go:89] found id: ""
	I0314 01:00:39.219712   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.219720   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:39.219730   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:39.219747   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:39.234611   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:39.234642   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:39.306760   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:39.306808   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:39.306824   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:39.390739   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:39.390799   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:39.441782   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:39.441813   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:41.994667   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:42.008795   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:42.008865   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:42.045814   66232 cri.go:89] found id: ""
	I0314 01:00:42.045839   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.045846   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:42.045852   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:42.045903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:42.085519   66232 cri.go:89] found id: ""
	I0314 01:00:42.085550   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.085563   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:42.085571   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:42.085636   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:42.127334   66232 cri.go:89] found id: ""
	I0314 01:00:42.127359   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.127368   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:42.127374   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:42.127425   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:42.168890   66232 cri.go:89] found id: ""
	I0314 01:00:42.168915   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.168923   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:42.168929   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:42.168990   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:42.209915   66232 cri.go:89] found id: ""
	I0314 01:00:42.209937   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.209945   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:42.209950   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:42.210005   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:42.250858   66232 cri.go:89] found id: ""
	I0314 01:00:42.250880   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.250888   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:42.250897   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:42.250952   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:42.288731   66232 cri.go:89] found id: ""
	I0314 01:00:42.288779   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.288791   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:42.288799   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:42.288854   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:42.329002   66232 cri.go:89] found id: ""
	I0314 01:00:42.329030   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.329041   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:42.329052   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:42.329066   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:42.371408   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:42.371435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:42.429017   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:42.429053   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:42.446217   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:42.446255   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:42.525765   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:42.525786   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:42.525798   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:39.338188   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:41.836306   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:40.694284   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:43.193538   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:40.540167   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:43.039511   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.122600   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:45.137115   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:45.137172   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:45.177658   66232 cri.go:89] found id: ""
	I0314 01:00:45.177685   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.177693   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:45.177698   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:45.177758   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:45.218191   66232 cri.go:89] found id: ""
	I0314 01:00:45.218220   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.218228   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:45.218234   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:45.218291   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:45.263650   66232 cri.go:89] found id: ""
	I0314 01:00:45.263673   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.263682   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:45.263688   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:45.263741   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:45.299533   66232 cri.go:89] found id: ""
	I0314 01:00:45.299562   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.299573   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:45.299579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:45.299626   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:45.338985   66232 cri.go:89] found id: ""
	I0314 01:00:45.339011   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.339021   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:45.339028   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:45.339089   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:45.380178   66232 cri.go:89] found id: ""
	I0314 01:00:45.380202   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.380210   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:45.380216   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:45.380272   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:45.420424   66232 cri.go:89] found id: ""
	I0314 01:00:45.420458   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.420470   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:45.420478   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:45.420540   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:45.460829   66232 cri.go:89] found id: ""
	I0314 01:00:45.460852   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.460860   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:45.460870   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:45.460886   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:45.516541   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:45.516578   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:45.532856   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:45.532880   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:45.611749   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:45.611772   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:45.611786   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:45.693268   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:45.693297   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:43.836776   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:46.336671   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.692531   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:47.692748   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.539526   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:47.542274   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.037560   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:48.240420   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:48.254985   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:48.255045   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:48.294167   66232 cri.go:89] found id: ""
	I0314 01:00:48.294190   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.294198   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:48.294204   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:48.294265   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:48.331189   66232 cri.go:89] found id: ""
	I0314 01:00:48.331214   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.331223   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:48.331231   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:48.331291   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:48.367601   66232 cri.go:89] found id: ""
	I0314 01:00:48.367641   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.367652   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:48.367660   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:48.367723   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:48.405032   66232 cri.go:89] found id: ""
	I0314 01:00:48.405061   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.405072   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:48.405080   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:48.405148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:48.444641   66232 cri.go:89] found id: ""
	I0314 01:00:48.444664   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.444672   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:48.444678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:48.444737   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:48.481624   66232 cri.go:89] found id: ""
	I0314 01:00:48.481653   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.481661   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:48.481667   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:48.481718   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:48.518944   66232 cri.go:89] found id: ""
	I0314 01:00:48.518976   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.518984   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:48.518989   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:48.519047   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:48.558455   66232 cri.go:89] found id: ""
	I0314 01:00:48.558495   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.558506   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:48.558518   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:48.558533   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:48.604953   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:48.604983   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:48.655766   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:48.655799   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:48.670370   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:48.670395   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:48.750567   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:48.750588   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:48.750601   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:51.342004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:51.356115   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:51.356180   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:51.393740   66232 cri.go:89] found id: ""
	I0314 01:00:51.393766   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.393773   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:51.393778   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:51.393824   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:51.432939   66232 cri.go:89] found id: ""
	I0314 01:00:51.432969   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.432980   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:51.432998   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:51.433066   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:51.469309   66232 cri.go:89] found id: ""
	I0314 01:00:51.469332   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.469340   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:51.469345   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:51.469395   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:51.506576   66232 cri.go:89] found id: ""
	I0314 01:00:51.506606   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.506618   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:51.506626   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:51.506687   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:51.547323   66232 cri.go:89] found id: ""
	I0314 01:00:51.547348   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.547358   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:51.547365   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:51.547422   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:51.588257   66232 cri.go:89] found id: ""
	I0314 01:00:51.588281   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.588289   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:51.588295   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:51.588353   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:51.629026   66232 cri.go:89] found id: ""
	I0314 01:00:51.629049   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.629057   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:51.629064   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:51.629116   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:51.668857   66232 cri.go:89] found id: ""
	I0314 01:00:51.668890   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.668903   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:51.668914   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:51.668930   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:51.724282   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:51.724329   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:51.739513   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:51.739543   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:51.815089   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:51.815116   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:51.815132   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:51.898576   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:51.898613   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:48.836517   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.837605   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:53.334491   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.192748   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:52.694281   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:52.038194   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:54.538685   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:54.441122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:54.456300   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:54.456358   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:54.492731   66232 cri.go:89] found id: ""
	I0314 01:00:54.492764   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.492776   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:54.492784   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:54.492847   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:54.530965   66232 cri.go:89] found id: ""
	I0314 01:00:54.530994   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.531005   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:54.531013   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:54.531075   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:54.570440   66232 cri.go:89] found id: ""
	I0314 01:00:54.570470   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.570487   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:54.570495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:54.570557   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:54.611569   66232 cri.go:89] found id: ""
	I0314 01:00:54.611592   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.611599   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:54.611606   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:54.611660   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:54.648383   66232 cri.go:89] found id: ""
	I0314 01:00:54.648412   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.648421   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:54.648427   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:54.648476   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:54.686598   66232 cri.go:89] found id: ""
	I0314 01:00:54.686621   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.686636   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:54.686644   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:54.686701   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:54.726413   66232 cri.go:89] found id: ""
	I0314 01:00:54.726436   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.726444   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:54.726450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:54.726496   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:54.764126   66232 cri.go:89] found id: ""
	I0314 01:00:54.764167   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.764177   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:54.764187   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:54.764201   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:54.841584   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:54.841612   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:54.841628   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:54.929736   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:54.929770   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:54.972612   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:54.972638   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:55.038415   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:55.038443   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
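Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`; pgrep exits non-zero when no matching process exists, which is why the run keeps falling back to log gathering and why the later `kubectl describe nodes` calls are refused on localhost:8443. A small illustrative sketch of that probe (again assuming sudo access on the node) is:

	// Probe for a running kube-apiserver process the way the log's pgrep call does.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// -f matches against the full command line, -x requires the pattern to
		// match it exactly, -n picks the newest matching process.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err != nil {
			fmt.Println("kube-apiserver process not found:", err)
			return
		}
		fmt.Println("kube-apiserver process is running")
	}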
	I0314 01:00:57.553419   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:57.567807   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:57.567865   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:57.608042   66232 cri.go:89] found id: ""
	I0314 01:00:57.608069   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.608077   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:57.608082   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:57.608138   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:57.647991   66232 cri.go:89] found id: ""
	I0314 01:00:57.648022   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.648031   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:57.648036   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:57.648096   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:57.687506   66232 cri.go:89] found id: ""
	I0314 01:00:57.687529   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.687537   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:57.687544   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:57.687603   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:57.726178   66232 cri.go:89] found id: ""
	I0314 01:00:57.726214   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.726224   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:57.726233   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:57.726297   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:57.763847   66232 cri.go:89] found id: ""
	I0314 01:00:57.763874   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.763881   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:57.763887   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:57.763946   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:57.800962   66232 cri.go:89] found id: ""
	I0314 01:00:57.800990   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.801001   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:57.801010   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:57.801063   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:57.838942   66232 cri.go:89] found id: ""
	I0314 01:00:57.838963   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.838970   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:57.838975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:57.839021   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:57.875376   66232 cri.go:89] found id: ""
	I0314 01:00:57.875405   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.875415   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:57.875424   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:57.875435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:57.917732   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:57.917755   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:57.971528   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:57.971561   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:57.986854   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:57.986879   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:58.066955   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:58.066975   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:58.066985   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:55.337356   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.836856   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:55.191933   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.193287   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:59.197833   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.039559   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:59.537165   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:00.655786   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:00.672026   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:00.672105   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:00.711128   66232 cri.go:89] found id: ""
	I0314 01:01:00.711157   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.711167   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:00.711174   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:00.711236   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:00.748236   66232 cri.go:89] found id: ""
	I0314 01:01:00.748264   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.748276   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:00.748284   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:00.748347   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:00.787436   66232 cri.go:89] found id: ""
	I0314 01:01:00.787470   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.787478   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:00.787486   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:00.787536   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:00.828583   66232 cri.go:89] found id: ""
	I0314 01:01:00.828605   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.828615   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:00.828623   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:00.828683   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:00.866856   66232 cri.go:89] found id: ""
	I0314 01:01:00.866885   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.866896   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:00.866903   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:00.866964   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:00.904860   66232 cri.go:89] found id: ""
	I0314 01:01:00.904883   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.904890   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:00.904895   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:00.904943   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:00.942199   66232 cri.go:89] found id: ""
	I0314 01:01:00.942232   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.942243   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:00.942253   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:00.942322   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:01.003925   66232 cri.go:89] found id: ""
	I0314 01:01:01.003951   66232 logs.go:276] 0 containers: []
	W0314 01:01:01.003961   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:01.003972   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:01.003987   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:01.057875   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:01.057903   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:01.074102   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:01.074128   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:01.147570   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:01.147602   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:01.147617   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:01.229816   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:01.229846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:00.337903   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:02.836288   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:01.693336   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:04.193878   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:01.539596   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:04.037927   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:03.775990   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:03.789826   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:03.789893   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:03.832595   66232 cri.go:89] found id: ""
	I0314 01:01:03.832620   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.832631   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:03.832639   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:03.832701   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:03.870895   66232 cri.go:89] found id: ""
	I0314 01:01:03.870914   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.870922   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:03.870928   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:03.870975   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:03.909337   66232 cri.go:89] found id: ""
	I0314 01:01:03.909368   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.909379   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:03.909387   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:03.909447   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:03.952071   66232 cri.go:89] found id: ""
	I0314 01:01:03.952100   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.952110   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:03.952119   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:03.952182   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:03.989374   66232 cri.go:89] found id: ""
	I0314 01:01:03.989403   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.989413   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:03.989421   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:03.989470   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:04.027654   66232 cri.go:89] found id: ""
	I0314 01:01:04.027683   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.027693   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:04.027702   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:04.027770   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:04.064870   66232 cri.go:89] found id: ""
	I0314 01:01:04.064904   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.064915   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:04.064923   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:04.064978   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:04.103214   66232 cri.go:89] found id: ""
	I0314 01:01:04.103246   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.103257   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:04.103268   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:04.103282   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:04.154061   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:04.154098   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:04.168955   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:04.168981   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:04.245214   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:04.245239   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:04.245254   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:04.321782   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:04.321822   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:06.864312   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:06.879181   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:06.879259   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:06.919707   66232 cri.go:89] found id: ""
	I0314 01:01:06.919731   66232 logs.go:276] 0 containers: []
	W0314 01:01:06.919742   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:06.919749   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:06.919809   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:06.964118   66232 cri.go:89] found id: ""
	I0314 01:01:06.964154   66232 logs.go:276] 0 containers: []
	W0314 01:01:06.964165   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:06.964173   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:06.964222   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:07.005923   66232 cri.go:89] found id: ""
	I0314 01:01:07.005948   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.005955   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:07.005961   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:07.006014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:07.048297   66232 cri.go:89] found id: ""
	I0314 01:01:07.048329   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.048336   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:07.048342   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:07.048400   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:07.089009   66232 cri.go:89] found id: ""
	I0314 01:01:07.089036   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.089044   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:07.089049   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:07.089108   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:07.125228   66232 cri.go:89] found id: ""
	I0314 01:01:07.125251   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.125259   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:07.125269   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:07.125329   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:07.163710   66232 cri.go:89] found id: ""
	I0314 01:01:07.163736   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.163743   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:07.163751   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:07.163797   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:07.202886   66232 cri.go:89] found id: ""
	I0314 01:01:07.202909   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.202916   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:07.202924   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:07.202936   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:07.249071   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:07.249098   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:07.304923   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:07.304958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:07.319983   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:07.320011   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:07.398592   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:07.398627   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:07.398640   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:05.337479   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:07.836304   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:06.692373   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.192747   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:06.539182   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.038291   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.987439   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:10.002348   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:10.002424   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:10.039153   66232 cri.go:89] found id: ""
	I0314 01:01:10.039173   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.039179   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:10.039185   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:10.039236   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:10.073527   66232 cri.go:89] found id: ""
	I0314 01:01:10.073557   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.073568   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:10.073575   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:10.073650   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:10.112192   66232 cri.go:89] found id: ""
	I0314 01:01:10.112213   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.112223   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:10.112230   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:10.112288   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:10.152821   66232 cri.go:89] found id: ""
	I0314 01:01:10.152848   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.152857   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:10.152862   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:10.152919   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:10.189327   66232 cri.go:89] found id: ""
	I0314 01:01:10.189352   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.189364   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:10.189371   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:10.189427   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:10.233885   66232 cri.go:89] found id: ""
	I0314 01:01:10.233909   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.233917   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:10.233923   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:10.233975   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:10.272033   66232 cri.go:89] found id: ""
	I0314 01:01:10.272061   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.272069   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:10.272075   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:10.272129   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:10.312680   66232 cri.go:89] found id: ""
	I0314 01:01:10.312706   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.312717   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:10.312727   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:10.312742   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:10.327507   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:10.327537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:10.410274   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:10.410299   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:10.410311   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:10.498686   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:10.498721   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:10.543509   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:10.543561   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:13.098621   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:10.335968   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:12.836293   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:11.692899   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.696150   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:11.538154   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.540093   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.114598   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:13.114685   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:13.169907   66232 cri.go:89] found id: ""
	I0314 01:01:13.169930   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.169937   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:13.169943   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:13.169999   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:13.237394   66232 cri.go:89] found id: ""
	I0314 01:01:13.237417   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.237429   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:13.237439   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:13.237502   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:13.295227   66232 cri.go:89] found id: ""
	I0314 01:01:13.295250   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.295258   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:13.295265   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:13.295326   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:13.333351   66232 cri.go:89] found id: ""
	I0314 01:01:13.333378   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.333388   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:13.333396   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:13.333457   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:13.376480   66232 cri.go:89] found id: ""
	I0314 01:01:13.376503   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.376511   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:13.376516   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:13.376578   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:13.416746   66232 cri.go:89] found id: ""
	I0314 01:01:13.416778   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.416786   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:13.416792   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:13.416842   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:13.455971   66232 cri.go:89] found id: ""
	I0314 01:01:13.456004   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.456014   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:13.456022   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:13.456090   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:13.493921   66232 cri.go:89] found id: ""
	I0314 01:01:13.493952   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.493964   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:13.493975   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:13.493994   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:13.582269   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:13.582317   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:13.627643   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:13.627675   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:13.680989   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:13.681021   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:13.696675   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:13.696708   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:13.768850   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:16.269385   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:16.284543   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:16.284607   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:16.322317   66232 cri.go:89] found id: ""
	I0314 01:01:16.322345   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.322356   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:16.322364   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:16.322412   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:16.362651   66232 cri.go:89] found id: ""
	I0314 01:01:16.362686   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.362697   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:16.362705   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:16.362782   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:16.403239   66232 cri.go:89] found id: ""
	I0314 01:01:16.403268   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.403276   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:16.403282   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:16.403339   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:16.442326   66232 cri.go:89] found id: ""
	I0314 01:01:16.442348   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.442355   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:16.442361   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:16.442423   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:16.480694   66232 cri.go:89] found id: ""
	I0314 01:01:16.480722   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.480733   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:16.480741   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:16.480809   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:16.521555   66232 cri.go:89] found id: ""
	I0314 01:01:16.521585   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.521596   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:16.521603   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:16.521663   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:16.564517   66232 cri.go:89] found id: ""
	I0314 01:01:16.564544   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.564555   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:16.564561   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:16.564641   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:16.602650   66232 cri.go:89] found id: ""
	I0314 01:01:16.602680   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.602690   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:16.602701   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:16.602715   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:16.645742   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:16.645777   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:16.704940   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:16.704972   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:16.720393   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:16.720420   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:16.799609   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:16.799640   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:16.799655   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:14.836773   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:17.336818   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:16.192938   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:18.193968   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:16.038263   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:18.538739   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:19.388482   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:19.402293   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:19.402372   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:19.439978   66232 cri.go:89] found id: ""
	I0314 01:01:19.440002   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.440025   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:19.440033   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:19.440112   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:19.475984   66232 cri.go:89] found id: ""
	I0314 01:01:19.476011   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.476019   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:19.476026   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:19.476078   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:19.512705   66232 cri.go:89] found id: ""
	I0314 01:01:19.512733   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.512742   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:19.512748   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:19.512793   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:19.552300   66232 cri.go:89] found id: ""
	I0314 01:01:19.552329   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.552339   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:19.552347   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:19.552413   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:19.598630   66232 cri.go:89] found id: ""
	I0314 01:01:19.598660   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.598670   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:19.598678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:19.598741   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:19.635883   66232 cri.go:89] found id: ""
	I0314 01:01:19.635912   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.635924   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:19.635931   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:19.635991   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:19.670339   66232 cri.go:89] found id: ""
	I0314 01:01:19.670364   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.670371   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:19.670377   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:19.670430   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:19.709469   66232 cri.go:89] found id: ""
	I0314 01:01:19.709512   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.709522   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:19.709533   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:19.709551   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:19.782157   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:19.782181   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:19.782192   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:19.866496   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:19.866531   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:19.910167   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:19.910198   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:19.963516   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:19.963546   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:22.478995   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:22.493273   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:22.493351   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:22.531559   66232 cri.go:89] found id: ""
	I0314 01:01:22.531581   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.531588   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:22.531594   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:22.531651   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:22.569478   66232 cri.go:89] found id: ""
	I0314 01:01:22.569508   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.569516   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:22.569524   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:22.569570   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:22.607573   66232 cri.go:89] found id: ""
	I0314 01:01:22.607599   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.607615   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:22.607625   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:22.607700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:22.644849   66232 cri.go:89] found id: ""
	I0314 01:01:22.644875   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.644885   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:22.644893   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:22.644950   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:22.683745   66232 cri.go:89] found id: ""
	I0314 01:01:22.683771   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.683779   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:22.683785   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:22.683845   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:22.723426   66232 cri.go:89] found id: ""
	I0314 01:01:22.723455   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.723462   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:22.723468   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:22.723512   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:22.761814   66232 cri.go:89] found id: ""
	I0314 01:01:22.761850   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.761860   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:22.761867   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:22.761918   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:22.799649   66232 cri.go:89] found id: ""
	I0314 01:01:22.799677   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.799687   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:22.799697   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:22.799707   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:22.840183   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:22.840215   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:22.893385   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:22.893416   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:22.909225   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:22.909250   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:22.982333   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:22.982353   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:22.982364   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:19.835211   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:21.835716   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:20.194985   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:22.692889   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:21.040809   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:23.538236   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:25.560639   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:25.575003   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:25.575082   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:25.613540   66232 cri.go:89] found id: ""
	I0314 01:01:25.613571   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.613583   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:25.613591   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:25.613653   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:25.652340   66232 cri.go:89] found id: ""
	I0314 01:01:25.652365   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.652373   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:25.652379   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:25.652425   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:25.691035   66232 cri.go:89] found id: ""
	I0314 01:01:25.691070   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.691079   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:25.691087   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:25.691152   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:25.729666   66232 cri.go:89] found id: ""
	I0314 01:01:25.729695   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.729705   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:25.729713   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:25.729783   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:25.766836   66232 cri.go:89] found id: ""
	I0314 01:01:25.766863   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.766871   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:25.766877   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:25.766934   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:25.813690   66232 cri.go:89] found id: ""
	I0314 01:01:25.813715   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.813727   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:25.813734   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:25.813796   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:25.858630   66232 cri.go:89] found id: ""
	I0314 01:01:25.858668   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.858679   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:25.858688   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:25.858774   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:25.896340   66232 cri.go:89] found id: ""
	I0314 01:01:25.896365   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.896372   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:25.896380   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:25.896392   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:25.949480   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:25.949513   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:25.965185   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:25.965211   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:26.041208   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:26.041228   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:26.041243   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:26.123892   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:26.123928   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:23.839306   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:26.335177   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.337014   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:24.695636   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:27.193395   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:29.200714   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:26.037924   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.038831   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.666449   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:28.679889   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:28.679948   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:28.717183   66232 cri.go:89] found id: ""
	I0314 01:01:28.717207   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.717214   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:28.717220   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:28.717275   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:28.761049   66232 cri.go:89] found id: ""
	I0314 01:01:28.761070   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.761077   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:28.761083   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:28.761133   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:28.800429   66232 cri.go:89] found id: ""
	I0314 01:01:28.800454   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.800462   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:28.800468   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:28.800523   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:28.841757   66232 cri.go:89] found id: ""
	I0314 01:01:28.841780   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.841788   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:28.841793   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:28.841838   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:28.883658   66232 cri.go:89] found id: ""
	I0314 01:01:28.883686   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.883696   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:28.883703   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:28.883759   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:28.918811   66232 cri.go:89] found id: ""
	I0314 01:01:28.918840   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.918851   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:28.918858   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:28.918916   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:28.955088   66232 cri.go:89] found id: ""
	I0314 01:01:28.955119   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.955130   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:28.955138   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:28.955195   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:28.992865   66232 cri.go:89] found id: ""
	I0314 01:01:28.992891   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.992903   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:28.992913   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:28.992931   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:29.080095   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:29.080132   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:29.127764   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:29.127789   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:29.182075   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:29.182109   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:29.198865   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:29.198891   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:29.277413   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:31.777693   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:31.792353   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:31.792426   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:31.830873   66232 cri.go:89] found id: ""
	I0314 01:01:31.830897   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.830904   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:31.830910   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:31.830955   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:31.868648   66232 cri.go:89] found id: ""
	I0314 01:01:31.868670   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.868677   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:31.868683   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:31.868733   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:31.910124   66232 cri.go:89] found id: ""
	I0314 01:01:31.910146   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.910155   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:31.910160   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:31.910209   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:31.957558   66232 cri.go:89] found id: ""
	I0314 01:01:31.957584   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.957592   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:31.957598   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:31.957652   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:32.000112   66232 cri.go:89] found id: ""
	I0314 01:01:32.000139   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.000157   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:32.000165   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:32.000229   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:32.037838   66232 cri.go:89] found id: ""
	I0314 01:01:32.037865   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.037876   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:32.037888   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:32.037949   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:32.076069   66232 cri.go:89] found id: ""
	I0314 01:01:32.076093   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.076101   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:32.076107   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:32.076172   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:32.114702   66232 cri.go:89] found id: ""
	I0314 01:01:32.114730   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.114737   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:32.114745   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:32.114757   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:32.162043   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:32.162078   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:32.219038   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:32.219075   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:32.234331   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:32.234358   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:32.307667   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:32.307688   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:32.307700   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:30.835936   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:33.335575   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:31.692739   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:33.693455   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:30.537265   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:32.538754   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:35.037382   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:34.893945   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:34.907888   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:34.907966   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:34.944887   66232 cri.go:89] found id: ""
	I0314 01:01:34.944911   66232 logs.go:276] 0 containers: []
	W0314 01:01:34.944919   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:34.944925   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:34.944973   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:34.992937   66232 cri.go:89] found id: ""
	I0314 01:01:34.992964   66232 logs.go:276] 0 containers: []
	W0314 01:01:34.992974   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:34.992982   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:34.993040   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:35.030147   66232 cri.go:89] found id: ""
	I0314 01:01:35.030171   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.030178   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:35.030184   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:35.030230   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:35.065966   66232 cri.go:89] found id: ""
	I0314 01:01:35.065999   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.066010   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:35.066018   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:35.066077   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:35.104221   66232 cri.go:89] found id: ""
	I0314 01:01:35.104251   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.104262   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:35.104270   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:35.104347   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:35.145221   66232 cri.go:89] found id: ""
	I0314 01:01:35.145245   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.145253   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:35.145258   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:35.145313   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:35.185119   66232 cri.go:89] found id: ""
	I0314 01:01:35.185152   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.185162   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:35.185168   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:35.185228   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:35.228309   66232 cri.go:89] found id: ""
	I0314 01:01:35.228341   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.228352   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:35.228363   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:35.228381   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:35.242185   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:35.242213   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:35.318542   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:35.318564   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:35.318578   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:35.396003   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:35.396042   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:35.437435   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:35.437464   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:37.992023   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:38.007180   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:38.007260   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:38.047871   66232 cri.go:89] found id: ""
	I0314 01:01:38.047906   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.047917   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:38.047925   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:38.047982   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:38.085359   66232 cri.go:89] found id: ""
	I0314 01:01:38.085388   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.085397   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:38.085404   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:38.085462   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:35.336258   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:37.835151   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:35.696328   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:38.192502   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:37.037490   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:39.038097   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:38.126190   66232 cri.go:89] found id: ""
	I0314 01:01:38.126219   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.126227   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:38.126233   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:38.126285   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:38.163163   66232 cri.go:89] found id: ""
	I0314 01:01:38.163190   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.163197   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:38.163202   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:38.163261   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:38.204338   66232 cri.go:89] found id: ""
	I0314 01:01:38.204360   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.204367   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:38.204372   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:38.204429   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:38.246252   66232 cri.go:89] found id: ""
	I0314 01:01:38.246278   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.246288   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:38.246296   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:38.246357   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:38.281173   66232 cri.go:89] found id: ""
	I0314 01:01:38.281198   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.281205   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:38.281211   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:38.281258   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:38.323744   66232 cri.go:89] found id: ""
	I0314 01:01:38.323774   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.323784   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:38.323794   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:38.323808   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:38.377987   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:38.378020   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:38.392879   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:38.392904   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:38.479475   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:38.479501   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:38.479515   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:38.563409   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:38.563440   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:41.105122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:41.119932   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:41.119997   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:41.158809   66232 cri.go:89] found id: ""
	I0314 01:01:41.158837   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.158847   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:41.158854   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:41.158915   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:41.201150   66232 cri.go:89] found id: ""
	I0314 01:01:41.201175   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.201183   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:41.201189   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:41.201239   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:41.240139   66232 cri.go:89] found id: ""
	I0314 01:01:41.240165   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.240173   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:41.240178   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:41.240232   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:41.278220   66232 cri.go:89] found id: ""
	I0314 01:01:41.278249   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.278257   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:41.278262   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:41.278310   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:41.313130   66232 cri.go:89] found id: ""
	I0314 01:01:41.313161   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.313170   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:41.313175   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:41.313235   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:41.351266   66232 cri.go:89] found id: ""
	I0314 01:01:41.351296   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.351305   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:41.351313   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:41.351378   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:41.389765   66232 cri.go:89] found id: ""
	I0314 01:01:41.389796   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.389807   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:41.389816   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:41.389893   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:41.437503   66232 cri.go:89] found id: ""
	I0314 01:01:41.437527   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.437537   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:41.437553   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:41.437568   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:41.451137   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:41.451170   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:41.554349   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:41.554376   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:41.554391   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:41.634670   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:41.634713   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:41.678576   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:41.678607   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:39.836520   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:41.837350   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:40.192708   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:42.193948   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:41.038661   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:43.538547   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:44.237699   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:44.252678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:44.252757   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:44.290393   66232 cri.go:89] found id: ""
	I0314 01:01:44.290420   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.290430   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:44.290438   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:44.290492   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:44.331394   66232 cri.go:89] found id: ""
	I0314 01:01:44.331426   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.331438   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:44.331446   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:44.331506   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:44.373654   66232 cri.go:89] found id: ""
	I0314 01:01:44.373686   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.373694   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:44.373702   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:44.373764   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:44.414168   66232 cri.go:89] found id: ""
	I0314 01:01:44.414198   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.414206   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:44.414212   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:44.414259   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:44.451158   66232 cri.go:89] found id: ""
	I0314 01:01:44.451183   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.451193   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:44.451201   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:44.451269   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:44.495410   66232 cri.go:89] found id: ""
	I0314 01:01:44.495436   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.495443   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:44.495450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:44.495509   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:44.539100   66232 cri.go:89] found id: ""
	I0314 01:01:44.539123   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.539129   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:44.539136   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:44.539189   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:44.581428   66232 cri.go:89] found id: ""
	I0314 01:01:44.581451   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.581463   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:44.581473   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:44.581491   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:44.657373   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:44.657393   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:44.657406   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:44.742163   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:44.742198   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:44.786447   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:44.786481   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:44.840479   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:44.840534   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:47.355369   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:47.369427   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:47.369491   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:47.408529   66232 cri.go:89] found id: ""
	I0314 01:01:47.408559   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.408567   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:47.408574   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:47.408619   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:47.445164   66232 cri.go:89] found id: ""
	I0314 01:01:47.445192   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.445201   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:47.445208   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:47.445255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:47.503333   66232 cri.go:89] found id: ""
	I0314 01:01:47.503367   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.503378   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:47.503385   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:47.503441   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:47.544289   66232 cri.go:89] found id: ""
	I0314 01:01:47.544313   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.544322   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:47.544329   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:47.544389   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:47.581686   66232 cri.go:89] found id: ""
	I0314 01:01:47.581707   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.581715   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:47.581726   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:47.581773   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:47.620907   66232 cri.go:89] found id: ""
	I0314 01:01:47.620937   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.620948   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:47.620954   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:47.620999   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:47.655975   66232 cri.go:89] found id: ""
	I0314 01:01:47.656006   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.656018   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:47.656026   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:47.656088   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:47.694787   66232 cri.go:89] found id: ""
	I0314 01:01:47.694813   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.694822   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:47.694832   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:47.694846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:47.732722   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:47.732752   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:47.784521   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:47.784551   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:47.798074   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:47.798096   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:47.872951   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:47.872971   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:47.872984   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:44.336278   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:46.336942   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:44.693975   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:47.194065   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:46.037997   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:48.038275   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.456896   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:50.472083   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:50.472159   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:50.510213   66232 cri.go:89] found id: ""
	I0314 01:01:50.510236   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.510244   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:50.510251   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:50.510308   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:50.551878   66232 cri.go:89] found id: ""
	I0314 01:01:50.551906   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.551915   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:50.551923   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:50.551983   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:50.599971   66232 cri.go:89] found id: ""
	I0314 01:01:50.599993   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.600000   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:50.600011   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:50.600068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:50.636105   66232 cri.go:89] found id: ""
	I0314 01:01:50.636135   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.636146   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:50.636154   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:50.636218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:50.674154   66232 cri.go:89] found id: ""
	I0314 01:01:50.674188   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.674199   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:50.674207   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:50.674273   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:50.711946   66232 cri.go:89] found id: ""
	I0314 01:01:50.711980   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.711992   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:50.711999   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:50.712048   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:50.750574   66232 cri.go:89] found id: ""
	I0314 01:01:50.750601   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.750612   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:50.750620   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:50.750679   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:50.788991   66232 cri.go:89] found id: ""
	I0314 01:01:50.789022   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.789033   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:50.789045   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:50.789060   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:50.842491   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:50.842524   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:50.857759   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:50.857785   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:50.929715   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:50.929739   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:50.929754   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:51.008843   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:51.008883   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:48.835669   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.835802   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.335897   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:49.692834   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:52.191722   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:54.192101   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.543509   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.037040   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.554369   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:53.569045   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:53.569125   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:53.607571   66232 cri.go:89] found id: ""
	I0314 01:01:53.607602   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.607613   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:53.607621   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:53.607700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:53.647998   66232 cri.go:89] found id: ""
	I0314 01:01:53.648027   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.648037   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:53.648044   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:53.648116   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:53.684825   66232 cri.go:89] found id: ""
	I0314 01:01:53.684855   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.684866   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:53.684873   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:53.684931   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:53.722438   66232 cri.go:89] found id: ""
	I0314 01:01:53.722465   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.722476   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:53.722484   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:53.722543   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:53.761945   66232 cri.go:89] found id: ""
	I0314 01:01:53.761987   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.761999   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:53.762014   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:53.762075   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:53.799307   66232 cri.go:89] found id: ""
	I0314 01:01:53.799338   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.799349   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:53.799362   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:53.799420   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:53.838685   66232 cri.go:89] found id: ""
	I0314 01:01:53.838713   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.838724   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:53.838731   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:53.838810   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:53.884324   66232 cri.go:89] found id: ""
	I0314 01:01:53.884351   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.884360   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:53.884370   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:53.884382   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:53.942495   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:53.942527   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:54.007790   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:54.007828   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:54.023348   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:54.023378   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:54.099122   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:54.099150   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:54.099165   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:56.679464   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:56.693691   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:56.693753   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:56.731721   66232 cri.go:89] found id: ""
	I0314 01:01:56.731749   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.731756   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:56.731761   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:56.731811   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:56.766579   66232 cri.go:89] found id: ""
	I0314 01:01:56.766607   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.766614   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:56.766620   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:56.766675   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:56.807537   66232 cri.go:89] found id: ""
	I0314 01:01:56.807565   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.807574   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:56.807579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:56.807631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:56.849077   66232 cri.go:89] found id: ""
	I0314 01:01:56.849100   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.849106   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:56.849112   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:56.849169   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:56.890982   66232 cri.go:89] found id: ""
	I0314 01:01:56.891003   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.891011   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:56.891016   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:56.891061   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:56.929769   66232 cri.go:89] found id: ""
	I0314 01:01:56.929790   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.929799   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:56.929805   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:56.929848   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:56.967319   66232 cri.go:89] found id: ""
	I0314 01:01:56.967346   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.967356   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:56.967363   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:56.967421   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:57.004649   66232 cri.go:89] found id: ""
	I0314 01:01:57.004670   66232 logs.go:276] 0 containers: []
	W0314 01:01:57.004677   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:57.004685   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:57.004696   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:57.018578   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:57.018604   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:57.090826   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:57.090852   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:57.090868   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:57.170367   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:57.170398   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:57.216138   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:57.216179   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:55.835724   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:57.836100   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:56.192712   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:58.193199   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:55.538829   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:58.037589   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:00.038724   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:59.769685   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:59.786652   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:59.786713   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:59.869453   66232 cri.go:89] found id: ""
	I0314 01:01:59.869480   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.869491   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:59.869499   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:59.869568   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:59.915747   66232 cri.go:89] found id: ""
	I0314 01:01:59.915769   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.915777   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:59.915782   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:59.915840   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:59.951088   66232 cri.go:89] found id: ""
	I0314 01:01:59.951117   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.951127   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:59.951133   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:59.951197   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:59.986847   66232 cri.go:89] found id: ""
	I0314 01:01:59.986877   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.986890   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:59.986898   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:59.986954   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:00.025390   66232 cri.go:89] found id: ""
	I0314 01:02:00.025420   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.025432   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:00.025440   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:00.025493   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:00.064174   66232 cri.go:89] found id: ""
	I0314 01:02:00.064206   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.064217   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:00.064226   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:00.064286   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:00.102079   66232 cri.go:89] found id: ""
	I0314 01:02:00.102102   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.102112   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:00.102119   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:00.102179   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:00.138672   66232 cri.go:89] found id: ""
	I0314 01:02:00.138700   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.138711   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:00.138721   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:00.138740   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:00.153516   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:00.153548   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:00.226585   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:00.226616   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:00.226631   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:00.307861   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:00.307898   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:00.353938   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:00.353966   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:02.909252   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:02.923483   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:02.923560   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:02.964379   66232 cri.go:89] found id: ""
	I0314 01:02:02.964408   66232 logs.go:276] 0 containers: []
	W0314 01:02:02.964419   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:02.964427   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:02.964486   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:03.001988   66232 cri.go:89] found id: ""
	I0314 01:02:03.002018   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.002028   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:03.002036   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:03.002106   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:03.043534   66232 cri.go:89] found id: ""
	I0314 01:02:03.043561   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.043572   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:03.043579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:03.043637   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:03.083413   66232 cri.go:89] found id: ""
	I0314 01:02:03.083436   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.083444   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:03.083450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:03.083504   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:59.837128   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.336485   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:00.692314   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.693186   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.039631   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:04.536890   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:03.117627   66232 cri.go:89] found id: ""
	I0314 01:02:03.117652   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.117664   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:03.117670   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:03.117718   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:03.151758   66232 cri.go:89] found id: ""
	I0314 01:02:03.151791   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.151802   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:03.151810   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:03.151861   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:03.192091   66232 cri.go:89] found id: ""
	I0314 01:02:03.192112   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.192118   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:03.192124   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:03.192178   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:03.235995   66232 cri.go:89] found id: ""
	I0314 01:02:03.236019   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.236029   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:03.236039   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:03.236053   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:03.289431   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:03.289475   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:03.305271   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:03.305325   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:03.383902   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:03.383922   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:03.383937   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:03.462882   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:03.462926   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:06.007991   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:06.023709   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:06.023768   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:06.063630   66232 cri.go:89] found id: ""
	I0314 01:02:06.063655   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.063662   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:06.063669   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:06.063727   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:06.103042   66232 cri.go:89] found id: ""
	I0314 01:02:06.103074   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.103083   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:06.103092   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:06.103149   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:06.139774   66232 cri.go:89] found id: ""
	I0314 01:02:06.139799   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.139810   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:06.139817   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:06.139874   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:06.176671   66232 cri.go:89] found id: ""
	I0314 01:02:06.176713   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.176724   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:06.176732   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:06.176798   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:06.216798   66232 cri.go:89] found id: ""
	I0314 01:02:06.216828   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.216840   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:06.216847   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:06.216903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:06.256606   66232 cri.go:89] found id: ""
	I0314 01:02:06.256635   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.256645   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:06.256653   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:06.256712   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:06.295087   66232 cri.go:89] found id: ""
	I0314 01:02:06.295119   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.295129   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:06.295137   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:06.295198   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:06.329411   66232 cri.go:89] found id: ""
	I0314 01:02:06.329441   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.329454   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:06.329464   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:06.329489   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:06.412363   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:06.412409   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:06.458902   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:06.458932   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:06.510147   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:06.510182   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:06.526670   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:06.526695   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:06.604970   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:04.835705   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:07.335832   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:04.693230   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:06.694579   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:08.697716   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:06.538380   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:08.538547   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:09.106124   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:09.119646   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:09.119709   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:09.155771   66232 cri.go:89] found id: ""
	I0314 01:02:09.155804   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.155815   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:09.155824   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:09.155883   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:09.191683   66232 cri.go:89] found id: ""
	I0314 01:02:09.191722   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.191734   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:09.191742   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:09.191808   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:09.227010   66232 cri.go:89] found id: ""
	I0314 01:02:09.227033   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.227041   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:09.227050   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:09.227118   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:09.262820   66232 cri.go:89] found id: ""
	I0314 01:02:09.262850   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.262861   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:09.262869   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:09.262925   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:09.296057   66232 cri.go:89] found id: ""
	I0314 01:02:09.296092   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.296102   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:09.296109   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:09.296171   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:09.329589   66232 cri.go:89] found id: ""
	I0314 01:02:09.329615   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.329626   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:09.329634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:09.329685   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:09.374675   66232 cri.go:89] found id: ""
	I0314 01:02:09.374702   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.374710   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:09.374718   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:09.374785   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:09.412467   66232 cri.go:89] found id: ""
	I0314 01:02:09.412497   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.412508   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:09.412518   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:09.412535   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:09.465354   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:09.465386   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:09.481823   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:09.481849   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:09.558431   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:09.558458   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:09.558475   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:09.641132   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:09.641171   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:12.190189   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:12.203783   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:12.203858   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:12.240189   66232 cri.go:89] found id: ""
	I0314 01:02:12.240219   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.240230   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:12.240238   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:12.240296   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:12.276307   66232 cri.go:89] found id: ""
	I0314 01:02:12.276336   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.276346   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:12.276354   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:12.276415   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:12.316916   66232 cri.go:89] found id: ""
	I0314 01:02:12.316949   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.316967   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:12.316975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:12.317036   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:12.356871   66232 cri.go:89] found id: ""
	I0314 01:02:12.356900   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.356910   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:12.356918   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:12.356981   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:12.391983   66232 cri.go:89] found id: ""
	I0314 01:02:12.392015   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.392026   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:12.392035   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:12.392105   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:12.428823   66232 cri.go:89] found id: ""
	I0314 01:02:12.428857   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.428868   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:12.428877   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:12.428938   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:12.466319   66232 cri.go:89] found id: ""
	I0314 01:02:12.466342   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.466349   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:12.466354   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:12.466413   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:12.502277   66232 cri.go:89] found id: ""
	I0314 01:02:12.502309   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.502321   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:12.502333   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:12.502352   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:12.582309   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:12.582340   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:12.621333   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:12.621357   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:12.678396   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:12.678432   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:12.694371   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:12.694397   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:12.767592   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:09.337016   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.339617   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.192226   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:13.195180   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.037728   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:13.037824   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.038206   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.268149   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:15.281634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:15.281707   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:15.316336   66232 cri.go:89] found id: ""
	I0314 01:02:15.316358   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.316366   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:15.316373   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:15.316437   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:15.356168   66232 cri.go:89] found id: ""
	I0314 01:02:15.356194   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.356201   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:15.356206   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:15.356257   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:15.394686   66232 cri.go:89] found id: ""
	I0314 01:02:15.394714   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.394726   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:15.394734   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:15.394813   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:15.433996   66232 cri.go:89] found id: ""
	I0314 01:02:15.434023   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.434034   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:15.434042   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:15.434103   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:15.479544   66232 cri.go:89] found id: ""
	I0314 01:02:15.479572   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.479583   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:15.479590   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:15.479659   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:15.514835   66232 cri.go:89] found id: ""
	I0314 01:02:15.514865   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.514875   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:15.514883   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:15.514942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:15.554980   66232 cri.go:89] found id: ""
	I0314 01:02:15.555011   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.555022   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:15.555030   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:15.555092   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:15.590130   66232 cri.go:89] found id: ""
	I0314 01:02:15.590167   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.590178   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:15.590188   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:15.590203   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:15.658375   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:15.658394   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:15.658407   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:15.737774   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:15.737806   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:15.780480   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:15.780512   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:15.832787   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:15.832830   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:13.834955   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.836544   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:17.836736   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.693510   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:18.193089   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:17.537729   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:19.540149   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:18.350032   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:18.364871   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:18.364931   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:18.406581   66232 cri.go:89] found id: ""
	I0314 01:02:18.406611   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.406620   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:18.406633   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:18.406696   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:18.446140   66232 cri.go:89] found id: ""
	I0314 01:02:18.446166   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.446176   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:18.446183   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:18.446242   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:18.492662   66232 cri.go:89] found id: ""
	I0314 01:02:18.492705   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.492713   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:18.492719   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:18.492777   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:18.535933   66232 cri.go:89] found id: ""
	I0314 01:02:18.535961   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.535972   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:18.535980   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:18.536056   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:18.574133   66232 cri.go:89] found id: ""
	I0314 01:02:18.574159   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.574167   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:18.574173   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:18.574227   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:18.612726   66232 cri.go:89] found id: ""
	I0314 01:02:18.612750   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.612757   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:18.612763   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:18.612815   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:18.653068   66232 cri.go:89] found id: ""
	I0314 01:02:18.653092   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.653099   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:18.653105   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:18.653148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:18.692840   66232 cri.go:89] found id: ""
	I0314 01:02:18.692880   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.692890   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:18.692902   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:18.692915   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:18.748680   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:18.748717   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:18.764026   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:18.764054   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:18.841767   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:18.841791   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:18.841805   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:18.923479   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:18.923512   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:21.467679   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:21.482326   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:21.482400   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:21.519603   66232 cri.go:89] found id: ""
	I0314 01:02:21.519627   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.519635   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:21.519641   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:21.519711   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:21.562301   66232 cri.go:89] found id: ""
	I0314 01:02:21.562325   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.562333   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:21.562338   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:21.562395   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:21.599503   66232 cri.go:89] found id: ""
	I0314 01:02:21.599531   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.599539   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:21.599545   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:21.599598   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:21.635347   66232 cri.go:89] found id: ""
	I0314 01:02:21.635378   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.635390   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:21.635397   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:21.635458   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:21.672622   66232 cri.go:89] found id: ""
	I0314 01:02:21.672648   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.672658   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:21.672667   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:21.672719   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:21.713177   66232 cri.go:89] found id: ""
	I0314 01:02:21.713201   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.713209   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:21.713217   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:21.713277   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:21.754273   66232 cri.go:89] found id: ""
	I0314 01:02:21.754312   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.754336   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:21.754350   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:21.754408   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:21.793782   66232 cri.go:89] found id: ""
	I0314 01:02:21.793832   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.793852   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:21.793864   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:21.793886   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:21.877495   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:21.877521   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:21.877536   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:21.963446   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:21.963485   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:22.005250   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:22.005286   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:22.081328   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:22.081368   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:20.336150   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:21.836598   65864 pod_ready.go:81] duration metric: took 4m0.008051794s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	E0314 01:02:21.836623   65864 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:02:21.836633   65864 pod_ready.go:38] duration metric: took 4m4.551998385s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:02:21.836650   65864 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:02:21.836684   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:21.836737   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:21.913367   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:21.913392   65864 cri.go:89] found id: ""
	I0314 01:02:21.913401   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:21.913461   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:21.920425   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:21.920491   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:21.968527   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:21.968560   65864 cri.go:89] found id: ""
	I0314 01:02:21.968578   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:21.968641   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:21.973938   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:21.974019   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:22.027214   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:22.027239   65864 cri.go:89] found id: ""
	I0314 01:02:22.027250   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:22.027301   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.033919   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:22.034007   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:22.085453   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:22.085477   65864 cri.go:89] found id: ""
	I0314 01:02:22.085486   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:22.085541   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.091651   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:22.091726   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:22.134083   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:22.134112   65864 cri.go:89] found id: ""
	I0314 01:02:22.134121   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:22.134179   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.139013   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:22.139089   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:22.176760   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:22.176785   65864 cri.go:89] found id: ""
	I0314 01:02:22.176795   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:22.176844   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.182497   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:22.182573   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:22.236966   65864 cri.go:89] found id: ""
	I0314 01:02:22.237000   65864 logs.go:276] 0 containers: []
	W0314 01:02:22.237010   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:22.237017   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:22.237078   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:22.289422   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:22.289448   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:22.289454   65864 cri.go:89] found id: ""
	I0314 01:02:22.289462   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:22.289526   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.295489   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.300166   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:22.300189   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:22.361740   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:22.361779   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:22.432402   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:22.432443   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:22.476348   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:22.476378   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:22.516881   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:22.516911   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:22.576864   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:22.576899   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:22.622739   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:22.622783   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:22.679757   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:22.679794   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:22.882084   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:22.882126   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:22.937962   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:22.937999   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:22.994180   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:22.994209   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:23.038730   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:23.038761   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:23.518422   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:23.518471   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:20.193555   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:22.194625   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:22.039562   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:24.043053   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:24.599757   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:24.615216   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:24.615273   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:24.654495   66232 cri.go:89] found id: ""
	I0314 01:02:24.654521   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.654529   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:24.654535   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:24.654581   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:24.691822   66232 cri.go:89] found id: ""
	I0314 01:02:24.691854   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.691864   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:24.691872   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:24.691927   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:24.734755   66232 cri.go:89] found id: ""
	I0314 01:02:24.734796   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.734806   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:24.734812   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:24.734864   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:24.770474   66232 cri.go:89] found id: ""
	I0314 01:02:24.770502   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.770513   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:24.770520   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:24.770564   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:24.807518   66232 cri.go:89] found id: ""
	I0314 01:02:24.807549   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.807562   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:24.807570   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:24.807636   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:24.844469   66232 cri.go:89] found id: ""
	I0314 01:02:24.844500   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.844513   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:24.844521   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:24.844585   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:24.882099   66232 cri.go:89] found id: ""
	I0314 01:02:24.882136   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.882147   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:24.882155   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:24.882215   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:24.922711   66232 cri.go:89] found id: ""
	I0314 01:02:24.922751   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.922773   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:24.922787   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:24.922802   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:24.965349   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:24.965374   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:25.021552   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:25.021585   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:25.039990   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:25.040027   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:25.116945   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:25.116967   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:25.116981   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:27.706427   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:27.722129   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:27.722193   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:27.762976   66232 cri.go:89] found id: ""
	I0314 01:02:27.763015   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.763023   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:27.763029   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:27.763077   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:27.803939   66232 cri.go:89] found id: ""
	I0314 01:02:27.803979   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.803990   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:27.803997   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:27.804068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:27.844923   66232 cri.go:89] found id: ""
	I0314 01:02:27.844946   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.844953   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:27.844959   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:27.845015   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:27.882694   66232 cri.go:89] found id: ""
	I0314 01:02:27.882717   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.882725   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:27.882731   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:27.882801   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:27.922926   66232 cri.go:89] found id: ""
	I0314 01:02:27.922958   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.922968   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:27.922975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:27.923035   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:27.960120   66232 cri.go:89] found id: ""
	I0314 01:02:27.960149   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.960160   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:27.960168   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:27.960228   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:28.015021   66232 cri.go:89] found id: ""
	I0314 01:02:28.015047   66232 logs.go:276] 0 containers: []
	W0314 01:02:28.015056   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:28.015062   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:28.015119   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:28.054923   66232 cri.go:89] found id: ""
	I0314 01:02:28.054946   66232 logs.go:276] 0 containers: []
	W0314 01:02:28.054952   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:28.054960   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:28.054972   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:26.038373   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:26.055483   65864 api_server.go:72] duration metric: took 4m14.013216316s to wait for apiserver process to appear ...
	I0314 01:02:26.055505   65864 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:02:26.055536   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:26.055585   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:26.108344   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:26.108363   65864 cri.go:89] found id: ""
	I0314 01:02:26.108370   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:26.108420   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.112806   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:26.112872   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:26.155399   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:26.155417   65864 cri.go:89] found id: ""
	I0314 01:02:26.155424   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:26.155468   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.159725   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:26.159780   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:26.201938   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:26.201960   65864 cri.go:89] found id: ""
	I0314 01:02:26.201968   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:26.202012   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.206751   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:26.206831   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:26.252327   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:26.252350   65864 cri.go:89] found id: ""
	I0314 01:02:26.252357   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:26.252405   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.257325   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:26.257387   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:26.297880   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:26.297901   65864 cri.go:89] found id: ""
	I0314 01:02:26.297910   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:26.297965   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.302607   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:26.302679   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:26.343104   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:26.343131   65864 cri.go:89] found id: ""
	I0314 01:02:26.343141   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:26.343207   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.347594   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:26.347652   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:26.390465   65864 cri.go:89] found id: ""
	I0314 01:02:26.390495   65864 logs.go:276] 0 containers: []
	W0314 01:02:26.390505   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:26.390517   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:26.390576   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:26.434540   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:26.434566   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:26.434572   65864 cri.go:89] found id: ""
	I0314 01:02:26.434582   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:26.434644   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.439794   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.445012   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:26.445036   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:26.488302   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:26.488331   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:26.526601   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:26.526630   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:26.578955   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:26.578989   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:26.633535   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:26.633573   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:26.764496   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:26.764533   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:26.822677   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:26.822713   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:26.866628   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:26.866653   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:26.909498   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:26.909524   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:26.965612   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:26.965646   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:27.004922   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:27.004974   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:27.422800   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:27.422844   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:27.441082   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:27.441113   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:24.693782   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:27.193414   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:26.537535   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:28.539922   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:28.111690   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:28.111723   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:28.126158   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:28.126189   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:28.200521   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:28.200542   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:28.200554   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:28.279637   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:28.279672   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:30.824286   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:30.840707   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.840787   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.888628   66232 cri.go:89] found id: ""
	I0314 01:02:30.888658   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.888669   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:30.888677   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.888758   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.934219   66232 cri.go:89] found id: ""
	I0314 01:02:30.934254   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.934264   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:30.934272   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.934332   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.979679   66232 cri.go:89] found id: ""
	I0314 01:02:30.979702   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.979713   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:30.979721   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.979792   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:31.024045   66232 cri.go:89] found id: ""
	I0314 01:02:31.024074   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.024085   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:31.024093   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:31.024150   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:31.070153   66232 cri.go:89] found id: ""
	I0314 01:02:31.070185   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.070197   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:31.070204   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:31.070267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:31.121943   66232 cri.go:89] found id: ""
	I0314 01:02:31.121972   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.121983   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:31.121992   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:31.122056   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:31.168934   66232 cri.go:89] found id: ""
	I0314 01:02:31.168951   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.168959   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:31.168965   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:31.169040   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:31.213885   66232 cri.go:89] found id: ""
	I0314 01:02:31.213917   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.213929   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:31.213939   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:31.213958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:31.304097   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:31.304127   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:31.304142   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:31.388525   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:31.388566   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:31.442920   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:31.442953   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:31.505932   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:31.505965   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:29.995508   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 01:02:30.001049   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0314 01:02:30.002172   65864 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 01:02:30.002194   65864 api_server.go:131] duration metric: took 3.946684299s to wait for apiserver health ...
	I0314 01:02:30.002201   65864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:02:30.002224   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.002268   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.043814   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:30.043836   65864 cri.go:89] found id: ""
	I0314 01:02:30.043850   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:30.043904   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.048215   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.048287   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.085507   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:30.085530   65864 cri.go:89] found id: ""
	I0314 01:02:30.085538   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:30.085587   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.089899   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.089958   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.129518   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:30.129538   65864 cri.go:89] found id: ""
	I0314 01:02:30.129545   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:30.129588   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.134037   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.134121   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:30.178092   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:30.178114   65864 cri.go:89] found id: ""
	I0314 01:02:30.178122   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:30.178174   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.184655   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:30.184712   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:30.223945   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:30.223969   65864 cri.go:89] found id: ""
	I0314 01:02:30.223987   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:30.224051   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.228354   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:30.228410   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:30.265712   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:30.265741   65864 cri.go:89] found id: ""
	I0314 01:02:30.265758   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:30.265814   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.270260   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:30.270312   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:30.320283   65864 cri.go:89] found id: ""
	I0314 01:02:30.320314   65864 logs.go:276] 0 containers: []
	W0314 01:02:30.320327   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:30.320334   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:30.320385   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:30.360838   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:30.360865   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:30.360869   65864 cri.go:89] found id: ""
	I0314 01:02:30.360876   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:30.360919   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.366350   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.370839   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:30.370862   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:30.422403   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:30.422432   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:30.461303   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:30.461333   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:30.500335   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:30.500364   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:30.925694   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:30.925740   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:30.977607   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:30.977643   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:31.040726   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:31.040758   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:31.097774   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:31.097811   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:31.161995   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:31.162038   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:31.229782   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:31.229823   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:31.268715   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:31.268742   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:31.288135   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:31.288164   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:31.459345   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:31.459375   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:34.020556   65864 system_pods.go:59] 8 kube-system pods found
	I0314 01:02:34.020589   65864 system_pods.go:61] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running
	I0314 01:02:34.020598   65864 system_pods.go:61] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running
	I0314 01:02:34.020607   65864 system_pods.go:61] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running
	I0314 01:02:34.020612   65864 system_pods.go:61] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running
	I0314 01:02:34.020616   65864 system_pods.go:61] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 01:02:34.020620   65864 system_pods.go:61] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running
	I0314 01:02:34.020628   65864 system_pods.go:61] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:34.020634   65864 system_pods.go:61] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 01:02:34.020644   65864 system_pods.go:74] duration metric: took 4.018436618s to wait for pod list to return data ...
	I0314 01:02:34.020653   65864 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:02:34.023473   65864 default_sa.go:45] found service account: "default"
	I0314 01:02:34.023496   65864 default_sa.go:55] duration metric: took 2.831779ms for default service account to be created ...
	I0314 01:02:34.023504   65864 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:02:34.030011   65864 system_pods.go:86] 8 kube-system pods found
	I0314 01:02:34.030060   65864 system_pods.go:89] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running
	I0314 01:02:34.030068   65864 system_pods.go:89] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running
	I0314 01:02:34.030077   65864 system_pods.go:89] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running
	I0314 01:02:34.030083   65864 system_pods.go:89] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running
	I0314 01:02:34.030092   65864 system_pods.go:89] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 01:02:34.030107   65864 system_pods.go:89] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running
	I0314 01:02:34.030124   65864 system_pods.go:89] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:34.030131   65864 system_pods.go:89] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 01:02:34.030143   65864 system_pods.go:126] duration metric: took 6.633594ms to wait for k8s-apps to be running ...
	I0314 01:02:34.030188   65864 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:02:34.030262   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:34.050932   65864 system_svc.go:56] duration metric: took 20.734837ms WaitForService to wait for kubelet
	I0314 01:02:34.050961   65864 kubeadm.go:576] duration metric: took 4m22.008698948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:02:34.050980   65864 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:02:34.055036   65864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:02:34.055068   65864 node_conditions.go:123] node cpu capacity is 2
	I0314 01:02:34.055083   65864 node_conditions.go:105] duration metric: took 4.097364ms to run NodePressure ...
	I0314 01:02:34.055105   65864 start.go:240] waiting for startup goroutines ...
	I0314 01:02:34.055118   65864 start.go:245] waiting for cluster config update ...
	I0314 01:02:34.055132   65864 start.go:254] writing updated cluster config ...
	I0314 01:02:34.055496   65864 ssh_runner.go:195] Run: rm -f paused
	I0314 01:02:34.113276   65864 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 01:02:34.115462   65864 out.go:177] * Done! kubectl is now configured to use "no-preload-585806" cluster and "default" namespace by default
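	For context on the healthz wait a few entries up (api_server.go polling https://192.168.39.115:8443/healthz until it returns 200), a minimal stand-alone probe in Go could look like the sketch below. This is an approximation for manual reproduction, not minikube's actual code; the address is the one from this run, and certificate verification is skipped only because the apiserver serves a self-signed certificate.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Address taken from the run above; adjust for your own cluster.
		url := "https://192.168.39.115:8443/healthz"

		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		// Poll until the endpoint answers 200 or the deadline passes.
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthz returned 200: ok")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}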
	I0314 01:02:29.693041   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:32.194975   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:30.538234   66021 pod_ready.go:81] duration metric: took 4m0.007493671s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	E0314 01:02:30.538259   66021 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:02:30.538266   66021 pod_ready.go:38] duration metric: took 4m4.916255619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:02:30.538278   66021 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:02:30.538307   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.538363   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.592811   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:30.592839   66021 cri.go:89] found id: ""
	I0314 01:02:30.592850   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:30.592911   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.598839   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.598908   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.642277   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:30.642301   66021 cri.go:89] found id: ""
	I0314 01:02:30.642310   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:30.642362   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.646745   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.646815   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.696518   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:30.696538   66021 cri.go:89] found id: ""
	I0314 01:02:30.696548   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:30.696601   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.701433   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.701496   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:30.741777   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:30.741805   66021 cri.go:89] found id: ""
	I0314 01:02:30.741815   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:30.741873   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.746610   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:30.746678   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:30.802714   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:30.802734   66021 cri.go:89] found id: ""
	I0314 01:02:30.802743   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:30.802905   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.807733   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:30.807800   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:30.857325   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:30.857348   66021 cri.go:89] found id: ""
	I0314 01:02:30.857357   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:30.857411   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.864272   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:30.864342   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:30.913206   66021 cri.go:89] found id: ""
	I0314 01:02:30.913233   66021 logs.go:276] 0 containers: []
	W0314 01:02:30.913240   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:30.913246   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:30.913306   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:30.962101   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:30.962140   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:30.962146   66021 cri.go:89] found id: ""
	I0314 01:02:30.962164   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:30.962225   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.968138   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.974297   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:30.974321   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:31.169483   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:31.169515   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:31.231894   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:31.231933   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:31.292732   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:31.292784   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:31.340076   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:31.340116   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:31.405921   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:31.405964   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:31.456370   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:31.456398   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:31.504710   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:31.504736   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:31.989644   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:31.989675   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:32.048608   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:32.048641   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:32.063791   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:32.063820   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:32.104259   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:32.104285   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:32.143364   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:32.143388   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:34.704603   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:34.723060   66021 api_server.go:72] duration metric: took 4m16.82749669s to wait for apiserver process to appear ...
	I0314 01:02:34.723094   66021 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:02:34.723131   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:34.723195   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:34.763208   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:34.763235   66021 cri.go:89] found id: ""
	I0314 01:02:34.763245   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:34.763321   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.768746   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:34.768824   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:34.811836   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:34.811859   66021 cri.go:89] found id: ""
	I0314 01:02:34.811867   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:34.811921   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.816649   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:34.816714   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:34.857291   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:34.857312   66021 cri.go:89] found id: ""
	I0314 01:02:34.857319   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:34.857364   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.861988   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:34.862069   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:34.903495   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:34.903520   66021 cri.go:89] found id: ""
	I0314 01:02:34.903529   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:34.903589   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.908672   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:34.908728   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:34.954304   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:34.954327   66021 cri.go:89] found id: ""
	I0314 01:02:34.954335   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:34.954381   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.959231   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:34.959288   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:35.004076   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:35.004102   66021 cri.go:89] found id: ""
	I0314 01:02:35.004111   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:35.004164   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.009125   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:35.009193   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:35.049932   66021 cri.go:89] found id: ""
	I0314 01:02:35.049961   66021 logs.go:276] 0 containers: []
	W0314 01:02:35.049971   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:35.049979   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:35.050047   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:35.107527   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:35.107575   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:35.107582   66021 cri.go:89] found id: ""
	I0314 01:02:35.107591   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:35.107649   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.112355   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.116898   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:35.116925   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:34.021725   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:34.039342   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:34.039420   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:34.086740   66232 cri.go:89] found id: ""
	I0314 01:02:34.086775   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.086787   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:34.086803   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:34.086869   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:34.131404   66232 cri.go:89] found id: ""
	I0314 01:02:34.131432   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.131440   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:34.131445   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:34.131497   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:34.179153   66232 cri.go:89] found id: ""
	I0314 01:02:34.179182   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.179192   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:34.179199   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:34.179255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:34.228867   66232 cri.go:89] found id: ""
	I0314 01:02:34.228892   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.228902   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:34.228908   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:34.228942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:34.272680   66232 cri.go:89] found id: ""
	I0314 01:02:34.272705   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.272715   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:34.272722   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:34.272772   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:34.311626   66232 cri.go:89] found id: ""
	I0314 01:02:34.311672   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.311684   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:34.311692   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:34.311751   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:34.349977   66232 cri.go:89] found id: ""
	I0314 01:02:34.349998   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.350006   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:34.350012   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:34.350070   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:34.398456   66232 cri.go:89] found id: ""
	I0314 01:02:34.398481   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.398491   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:34.398503   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:34.398515   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:34.472170   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:34.472208   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:34.498046   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:34.498076   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:34.574474   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:34.574496   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:34.574529   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:34.656398   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:34.656435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
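
When the CRI listing turns up no control-plane containers, minikube falls back to collecting a diagnostic bundle: the kubelet and CRI-O journals, recent kernel warnings, a "describe nodes" dump, and a container-status listing. The commands below are copied from the log lines above and are only a sketch of that collection run by hand (minikube drives them over SSH from its Go code):

    sudo journalctl -u kubelet -n 400                                        # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings and errors
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig                              # fails while the apiserver is down
    sudo journalctl -u crio -n 400                                           # CRI-O logs
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a           # container status, docker as fallback
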
	I0314 01:02:37.201236   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:37.216950   66232 kubeadm.go:591] duration metric: took 4m2.27726413s to restartPrimaryControlPlane
	W0314 01:02:37.217024   66232 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 01:02:37.217054   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 01:02:34.693825   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:37.191981   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:39.193819   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:35.155896   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:35.155929   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:35.198893   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:35.198923   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:35.258044   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:35.258076   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:35.296826   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:35.296859   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:35.349583   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:35.349619   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:35.400768   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:35.400805   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:35.528320   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:35.528357   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:35.571141   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:35.571174   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:35.612630   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:35.612658   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:36.034287   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:36.034321   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:36.093027   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:36.093054   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:36.150546   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:36.150589   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:38.673291   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 01:02:38.678087   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0314 01:02:38.679655   66021 api_server.go:141] control plane version: v1.28.4
	I0314 01:02:38.679674   66021 api_server.go:131] duration metric: took 3.956573598s to wait for apiserver health ...
	I0314 01:02:38.679680   66021 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:02:38.679700   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:38.679741   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:38.727884   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:38.727908   66021 cri.go:89] found id: ""
	I0314 01:02:38.727918   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:38.727974   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.732935   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:38.733003   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:38.771359   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:38.771387   66021 cri.go:89] found id: ""
	I0314 01:02:38.771397   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:38.771452   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.775888   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:38.775948   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:38.814905   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:38.814934   66021 cri.go:89] found id: ""
	I0314 01:02:38.814944   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:38.815018   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.820018   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:38.820096   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:38.869174   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:38.869200   66021 cri.go:89] found id: ""
	I0314 01:02:38.869210   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:38.869268   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.879998   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:38.880071   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:38.960143   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:38.960187   66021 cri.go:89] found id: ""
	I0314 01:02:38.960198   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:38.960258   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.964872   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:38.964940   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:39.005104   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:39.005126   66021 cri.go:89] found id: ""
	I0314 01:02:39.005134   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:39.005178   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.009751   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:39.009803   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:39.048232   66021 cri.go:89] found id: ""
	I0314 01:02:39.048263   66021 logs.go:276] 0 containers: []
	W0314 01:02:39.048274   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:39.048281   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:39.048335   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:39.087548   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:39.087568   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:39.087572   66021 cri.go:89] found id: ""
	I0314 01:02:39.087579   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:39.087624   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.092379   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.097599   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:39.097621   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:39.236455   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:39.236484   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:39.284275   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:39.284300   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:39.341908   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:39.341939   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:39.384407   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:39.384435   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:39.445137   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:39.445167   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:39.501656   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:39.501686   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:39.567627   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:39.567661   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:39.584561   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:39.584601   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:39.626131   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:39.626196   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:40.002525   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:40.002572   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:40.058721   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:40.058753   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:40.097905   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:40.097941   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:39.562661   66232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.345580159s)
	I0314 01:02:39.562733   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:39.579845   66232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 01:02:39.592242   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:02:39.603936   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:02:39.603962   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 01:02:39.604023   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:02:39.614854   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:02:39.614909   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:02:39.626602   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:02:39.637282   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:02:39.637334   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:02:39.650019   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:02:39.662020   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:02:39.662084   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:02:39.674740   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:02:39.685131   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:02:39.685190   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
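
The config check above works by grepping each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deleting any file that does not reference it; here every grep exits with status 2 simply because kubeadm reset has already removed the files. A minimal standalone sketch of that cleanup, assuming the same endpoint and file names:

    # Hypothetical shell rendering of the cleanup logged above.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # missing, or pointing at another endpoint
      fi
    done
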
	I0314 01:02:39.696251   66232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:02:39.768972   66232 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 01:02:39.769055   66232 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:02:39.926950   66232 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:02:39.927086   66232 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:02:39.927239   66232 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 01:02:40.161671   66232 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:02:40.164039   66232 out.go:204]   - Generating certificates and keys ...
	I0314 01:02:40.164124   66232 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:02:40.164219   66232 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:02:40.164321   66232 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 01:02:40.164411   66232 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 01:02:40.164508   66232 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 01:02:40.164595   66232 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 01:02:40.164680   66232 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 01:02:40.164762   66232 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 01:02:40.164868   66232 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 01:02:40.164982   66232 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 01:02:40.165050   66232 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 01:02:40.165123   66232 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:02:40.264416   66232 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:02:40.417229   66232 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:02:40.489457   66232 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:02:40.743517   66232 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:02:40.759319   66232 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:02:40.760643   66232 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:02:40.760715   66232 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:02:40.939953   66232 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 01:02:42.643820   66021 system_pods.go:59] 8 kube-system pods found
	I0314 01:02:42.643846   66021 system_pods.go:61] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running
	I0314 01:02:42.643851   66021 system_pods.go:61] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running
	I0314 01:02:42.643854   66021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running
	I0314 01:02:42.643858   66021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running
	I0314 01:02:42.643861   66021 system_pods.go:61] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running
	I0314 01:02:42.643863   66021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running
	I0314 01:02:42.643869   66021 system_pods.go:61] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:42.643874   66021 system_pods.go:61] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 01:02:42.643881   66021 system_pods.go:74] duration metric: took 3.964195909s to wait for pod list to return data ...
	I0314 01:02:42.643888   66021 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:02:42.646461   66021 default_sa.go:45] found service account: "default"
	I0314 01:02:42.646481   66021 default_sa.go:55] duration metric: took 2.585464ms for default service account to be created ...
	I0314 01:02:42.646490   66021 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:02:42.651961   66021 system_pods.go:86] 8 kube-system pods found
	I0314 01:02:42.651983   66021 system_pods.go:89] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running
	I0314 01:02:42.651989   66021 system_pods.go:89] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running
	I0314 01:02:42.651993   66021 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running
	I0314 01:02:42.651998   66021 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running
	I0314 01:02:42.652002   66021 system_pods.go:89] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running
	I0314 01:02:42.652006   66021 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running
	I0314 01:02:42.652012   66021 system_pods.go:89] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:42.652019   66021 system_pods.go:89] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 01:02:42.652027   66021 system_pods.go:126] duration metric: took 5.530611ms to wait for k8s-apps to be running ...
	I0314 01:02:42.652037   66021 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:02:42.652078   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:42.669896   66021 system_svc.go:56] duration metric: took 17.851623ms WaitForService to wait for kubelet
	I0314 01:02:42.669930   66021 kubeadm.go:576] duration metric: took 4m24.774372903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:02:42.669965   66021 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:02:42.672766   66021 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:02:42.672789   66021 node_conditions.go:123] node cpu capacity is 2
	I0314 01:02:42.672802   66021 node_conditions.go:105] duration metric: took 2.830665ms to run NodePressure ...
	I0314 01:02:42.672813   66021 start.go:240] waiting for startup goroutines ...
	I0314 01:02:42.672819   66021 start.go:245] waiting for cluster config update ...
	I0314 01:02:42.672829   66021 start.go:254] writing updated cluster config ...
	I0314 01:02:42.673076   66021 ssh_runner.go:195] Run: rm -f paused
	I0314 01:02:42.721481   66021 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 01:02:42.723479   66021 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-652215" cluster and "default" namespace by default
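
Before declaring the profile ready, minikube confirms that the apiserver healthz endpoint answers 200, that the kube-system pods and the default service account exist, and that the kubelet service is active. A rough command-line equivalent of those checks, assuming this profile's endpoint (192.168.61.7:8444) and the certificate directory reported by kubeadm (/var/lib/minikube/certs); the real check goes through the Go client with the cluster CA:

    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.61.7:8444/healthz   # expect: ok
    kubectl --context default-k8s-diff-port-652215 -n kube-system get pods           # everything Running except metrics-server
    sudo systemctl is-active kubelet                                                  # "active" when the kubelet unit is running
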
	I0314 01:02:40.942001   66232 out.go:204]   - Booting up control plane ...
	I0314 01:02:40.942144   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:02:40.951012   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:02:40.952452   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:02:40.953336   66232 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:02:40.960365   66232 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:02:41.692569   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:43.693995   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:46.193241   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:48.194371   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:50.692479   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:52.692654   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:55.192035   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:57.692909   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:00.193154   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:02.194296   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:04.196022   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:06.693006   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:09.192302   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:11.192955   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:13.692552   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:15.192489   65557 pod_ready.go:81] duration metric: took 4m0.007020608s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	E0314 01:03:15.192527   65557 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:03:15.192538   65557 pod_ready.go:38] duration metric: took 4m4.053934642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
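
The wait above gives up after its 4m0s budget because the metrics-server pod never reports Ready (its metrics-server container stays unready). A hypothetical follow-up to see what is blocking it, assuming the addon's usual k8s-app=metrics-server label:

    kubectl -n kube-system describe pod -l k8s-app=metrics-server    # events: image pulls, failing probes, etc.
    kubectl -n kube-system get pod -l k8s-app=metrics-server \
      -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")]}{"\n"}'
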
	I0314 01:03:15.192554   65557 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:03:15.192587   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:15.192647   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:15.256619   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:15.256643   65557 cri.go:89] found id: ""
	I0314 01:03:15.256653   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:15.256707   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.262251   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:15.262317   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:15.305577   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:15.305605   65557 cri.go:89] found id: ""
	I0314 01:03:15.305613   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:15.305676   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.311058   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:15.311136   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:15.350580   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:15.350605   65557 cri.go:89] found id: ""
	I0314 01:03:15.350615   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:15.350675   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.355574   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:15.355637   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:15.395248   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:15.395278   65557 cri.go:89] found id: ""
	I0314 01:03:15.395289   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:15.395345   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.400714   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:15.400789   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:15.446181   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:15.446207   65557 cri.go:89] found id: ""
	I0314 01:03:15.446217   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:15.446280   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.451142   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:15.451220   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:15.499079   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:15.499106   65557 cri.go:89] found id: ""
	I0314 01:03:15.499120   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:15.499178   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.504092   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:15.504158   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:15.546791   65557 cri.go:89] found id: ""
	I0314 01:03:15.546820   65557 logs.go:276] 0 containers: []
	W0314 01:03:15.546830   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:15.546838   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:15.546898   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:15.586249   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:15.586271   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:15.586275   65557 cri.go:89] found id: ""
	I0314 01:03:15.586282   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:15.586341   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.590680   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.595060   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:15.595086   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:16.112562   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:16.112623   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:16.172847   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:16.172882   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:16.333057   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:16.333098   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:16.386456   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:16.386490   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:16.444375   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:16.444402   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:16.486220   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:16.486260   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:16.526438   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:16.526470   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:16.576927   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:16.576958   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:16.592148   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:16.592174   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:16.648514   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:16.648545   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:16.695025   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:16.695051   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:16.746925   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:16.746955   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.285952   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:03:19.304257   65557 api_server.go:72] duration metric: took 4m15.904145845s to wait for apiserver process to appear ...
	I0314 01:03:19.304286   65557 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:03:19.304325   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:19.304387   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:20.960311   66232 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 01:03:20.961416   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:20.961634   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
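
After the initial 40 s, kubeadm's wait-control-plane loop reports the probe it is making: an HTTP GET against the kubelet's local healthz port. The same probe, plus the usual next steps when it fails, can be run on the node directly (the curl line is the one quoted in the message above):

    curl -sSL http://localhost:10248/healthz       # "ok" once the kubelet is up
    sudo systemctl status kubelet --no-pager       # current unit state and last start attempt
    sudo journalctl -u kubelet -n 50 --no-pager    # recent kubelet log lines
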
	I0314 01:03:19.352722   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:19.352749   65557 cri.go:89] found id: ""
	I0314 01:03:19.352758   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:19.352813   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.358745   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:19.358840   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:19.398652   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:19.398677   65557 cri.go:89] found id: ""
	I0314 01:03:19.398687   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:19.398745   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.403737   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:19.403812   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:19.449705   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:19.449789   65557 cri.go:89] found id: ""
	I0314 01:03:19.449804   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:19.449875   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.454646   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:19.454703   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:19.497413   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:19.497437   65557 cri.go:89] found id: ""
	I0314 01:03:19.497446   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:19.497505   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.502314   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:19.502383   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:19.544651   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:19.544670   65557 cri.go:89] found id: ""
	I0314 01:03:19.544677   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:19.544734   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.549565   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:19.549627   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:19.588946   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:19.588964   65557 cri.go:89] found id: ""
	I0314 01:03:19.588971   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:19.589021   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.593896   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:19.593962   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:19.635716   65557 cri.go:89] found id: ""
	I0314 01:03:19.635742   65557 logs.go:276] 0 containers: []
	W0314 01:03:19.635753   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:19.635759   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:19.635815   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:19.677464   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.677489   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:19.677495   65557 cri.go:89] found id: ""
	I0314 01:03:19.677505   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:19.677565   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.682353   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.687167   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:19.687188   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:19.736953   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:19.736991   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:19.781476   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:19.781506   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:19.822236   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:19.822265   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:19.866289   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:19.866312   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:19.911787   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:19.911815   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.950065   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:19.950101   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:19.989521   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:19.989554   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:20.384831   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:20.384868   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:20.441338   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:20.441369   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:20.457686   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:20.457713   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:20.576908   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:20.576939   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:20.620339   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:20.620368   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:23.171840   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 01:03:23.178026   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0314 01:03:23.179553   65557 api_server.go:141] control plane version: v1.28.4
	I0314 01:03:23.179581   65557 api_server.go:131] duration metric: took 3.875286718s to wait for apiserver health ...
	I0314 01:03:23.179592   65557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:03:23.179620   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:23.179680   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:23.228503   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:23.228523   65557 cri.go:89] found id: ""
	I0314 01:03:23.228530   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:23.228582   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.233166   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:23.233236   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:23.274079   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:23.274110   65557 cri.go:89] found id: ""
	I0314 01:03:23.274120   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:23.274179   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.279453   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:23.279559   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:23.319821   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:23.319844   65557 cri.go:89] found id: ""
	I0314 01:03:23.319854   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:23.319914   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.325134   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:23.325199   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:23.366475   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:23.366496   65557 cri.go:89] found id: ""
	I0314 01:03:23.366503   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:23.366547   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.371660   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:23.371716   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:23.416034   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:23.416060   65557 cri.go:89] found id: ""
	I0314 01:03:23.416069   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:23.416128   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.421256   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:23.421319   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:23.461772   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:23.461792   65557 cri.go:89] found id: ""
	I0314 01:03:23.461799   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:23.461848   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.466581   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:23.466644   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:23.513583   65557 cri.go:89] found id: ""
	I0314 01:03:23.513610   65557 logs.go:276] 0 containers: []
	W0314 01:03:23.513626   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:23.513633   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:23.513693   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:23.554856   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:23.554875   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:23.554879   65557 cri.go:89] found id: ""
	I0314 01:03:23.554885   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:23.554932   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.559820   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.564514   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:23.564534   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:23.619210   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:23.619246   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:23.750881   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:23.750908   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:23.800300   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:23.800342   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:23.849606   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:23.849637   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:23.896168   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:23.896194   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:23.938976   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:23.939008   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:23.955960   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:23.955988   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:23.999961   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:23.999990   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:24.044533   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:24.044562   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:24.097691   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:24.097720   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:24.137172   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:24.137207   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:24.480724   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:24.480767   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:27.042143   65557 system_pods.go:59] 8 kube-system pods found
	I0314 01:03:27.042177   65557 system_pods.go:61] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running
	I0314 01:03:27.042185   65557 system_pods.go:61] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running
	I0314 01:03:27.042191   65557 system_pods.go:61] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running
	I0314 01:03:27.042197   65557 system_pods.go:61] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running
	I0314 01:03:27.042201   65557 system_pods.go:61] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running
	I0314 01:03:27.042206   65557 system_pods.go:61] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running
	I0314 01:03:27.042213   65557 system_pods.go:61] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:03:27.042220   65557 system_pods.go:61] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running
	I0314 01:03:27.042231   65557 system_pods.go:74] duration metric: took 3.862631414s to wait for pod list to return data ...
	I0314 01:03:27.042241   65557 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:03:27.045464   65557 default_sa.go:45] found service account: "default"
	I0314 01:03:27.045542   65557 default_sa.go:55] duration metric: took 3.286713ms for default service account to be created ...
	I0314 01:03:27.045573   65557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:03:27.057164   65557 system_pods.go:86] 8 kube-system pods found
	I0314 01:03:27.057193   65557 system_pods.go:89] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running
	I0314 01:03:27.057199   65557 system_pods.go:89] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running
	I0314 01:03:27.057204   65557 system_pods.go:89] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running
	I0314 01:03:27.057209   65557 system_pods.go:89] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running
	I0314 01:03:27.057213   65557 system_pods.go:89] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running
	I0314 01:03:27.057217   65557 system_pods.go:89] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running
	I0314 01:03:27.057224   65557 system_pods.go:89] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:03:27.057236   65557 system_pods.go:89] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running
	I0314 01:03:27.057243   65557 system_pods.go:126] duration metric: took 11.663667ms to wait for k8s-apps to be running ...
	I0314 01:03:27.057249   65557 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:03:27.057295   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:03:27.075469   65557 system_svc.go:56] duration metric: took 18.20927ms WaitForService to wait for kubelet
	I0314 01:03:27.075501   65557 kubeadm.go:576] duration metric: took 4m23.675393774s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:03:27.075521   65557 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:03:27.079149   65557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:03:27.079177   65557 node_conditions.go:123] node cpu capacity is 2
	I0314 01:03:27.079191   65557 node_conditions.go:105] duration metric: took 3.664222ms to run NodePressure ...
	I0314 01:03:27.079204   65557 start.go:240] waiting for startup goroutines ...
	I0314 01:03:27.079214   65557 start.go:245] waiting for cluster config update ...
	I0314 01:03:27.079228   65557 start.go:254] writing updated cluster config ...
	I0314 01:03:27.079567   65557 ssh_runner.go:195] Run: rm -f paused
	I0314 01:03:27.128453   65557 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 01:03:27.131043   65557 out.go:177] * Done! kubectl is now configured to use "embed-certs-164135" cluster and "default" namespace by default
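The readiness checks logged above (the kube-system pods, the default service account, and the kubelet service) can be reproduced by hand against the same profile. A minimal sketch, assuming the "embed-certs-164135" context name taken from the log; the exact commands are illustrative and not part of the test harness:

    # re-check the items the log waits for, against the embed-certs-164135 context
    kubectl --context embed-certs-164135 get pods -n kube-system           # the 8 kube-system pods listed above
    kubectl --context embed-certs-164135 get serviceaccount default        # the default service account the log waits for
    minikube -p embed-certs-164135 ssh -- sudo systemctl is-active kubelet # same kubelet service check as system_svc.go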
	I0314 01:03:25.961895   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:25.962127   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:35.962149   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:35.962352   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:55.963116   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:55.963372   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:04:35.964528   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:04:35.964814   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:04:35.964841   66232 kubeadm.go:309] 
	I0314 01:04:35.964900   66232 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 01:04:35.964961   66232 kubeadm.go:309] 		timed out waiting for the condition
	I0314 01:04:35.964972   66232 kubeadm.go:309] 
	I0314 01:04:35.965026   66232 kubeadm.go:309] 	This error is likely caused by:
	I0314 01:04:35.965074   66232 kubeadm.go:309] 		- The kubelet is not running
	I0314 01:04:35.965219   66232 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 01:04:35.965231   66232 kubeadm.go:309] 
	I0314 01:04:35.965372   66232 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 01:04:35.965421   66232 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 01:04:35.965476   66232 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 01:04:35.965489   66232 kubeadm.go:309] 
	I0314 01:04:35.965638   66232 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 01:04:35.965743   66232 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 01:04:35.965753   66232 kubeadm.go:309] 
	I0314 01:04:35.965872   66232 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 01:04:35.965991   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 01:04:35.966110   66232 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 01:04:35.966220   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 01:04:35.966237   66232 kubeadm.go:309] 
	I0314 01:04:35.966903   66232 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:04:35.967031   66232 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 01:04:35.967165   66232 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0314 01:04:35.967278   66232 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
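The kubeadm output above already names the checks to run next: inspect the kubelet and list the control-plane containers via the CRI-O socket. A minimal sketch of those same commands run over SSH on the minikube node; "<profile>" is a hypothetical placeholder for the affected profile name, which this part of the log does not show:

    # commands taken from the kubeadm guidance above, run on the node via minikube ssh
    minikube -p <profile> ssh -- sudo systemctl status kubelet
    minikube -p <profile> ssh -- sudo journalctl -xeu kubelet
    minikube -p <profile> ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"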
	
	I0314 01:04:35.967374   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 01:04:36.533381   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:04:36.550315   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:04:36.562559   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:04:36.562582   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 01:04:36.562646   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:04:36.573080   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:04:36.573148   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:04:36.583367   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:04:36.592837   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:04:36.592905   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:04:36.602671   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:04:36.611880   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:04:36.611923   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:04:36.621373   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:04:36.630200   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:04:36.630250   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
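The grep/rm sequence above amounts to a stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. A minimal sketch of the same sweep as a single loop, assuming it is run on the node (for example via minikube ssh):

    # keep a kubeconfig only if it points at the expected control-plane endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done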
	I0314 01:04:36.639622   66232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:04:36.876475   66232 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:06:32.905531   66232 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 01:06:32.905658   66232 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 01:06:32.907378   66232 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 01:06:32.907462   66232 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:06:32.907597   66232 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:06:32.907758   66232 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:06:32.907878   66232 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 01:06:32.907969   66232 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:06:32.909826   66232 out.go:204]   - Generating certificates and keys ...
	I0314 01:06:32.909915   66232 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:06:32.909976   66232 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:06:32.910065   66232 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 01:06:32.910143   66232 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 01:06:32.910232   66232 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 01:06:32.910306   66232 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 01:06:32.910371   66232 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 01:06:32.910450   66232 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 01:06:32.910516   66232 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 01:06:32.910579   66232 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 01:06:32.910616   66232 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 01:06:32.910705   66232 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:06:32.910809   66232 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:06:32.910860   66232 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:06:32.910946   66232 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:06:32.911032   66232 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:06:32.911131   66232 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:06:32.911225   66232 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:06:32.911290   66232 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:06:32.911360   66232 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 01:06:32.912972   66232 out.go:204]   - Booting up control plane ...
	I0314 01:06:32.913087   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:06:32.913169   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:06:32.913260   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:06:32.913336   66232 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:06:32.913475   66232 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:06:32.913555   66232 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 01:06:32.913645   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.913879   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.913979   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914216   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914294   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914461   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914521   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914704   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914827   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.915063   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.915076   66232 kubeadm.go:309] 
	I0314 01:06:32.915112   66232 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 01:06:32.915167   66232 kubeadm.go:309] 		timed out waiting for the condition
	I0314 01:06:32.915177   66232 kubeadm.go:309] 
	I0314 01:06:32.915230   66232 kubeadm.go:309] 	This error is likely caused by:
	I0314 01:06:32.915269   66232 kubeadm.go:309] 		- The kubelet is not running
	I0314 01:06:32.915353   66232 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 01:06:32.915360   66232 kubeadm.go:309] 
	I0314 01:06:32.915441   66232 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 01:06:32.915469   66232 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 01:06:32.915498   66232 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 01:06:32.915505   66232 kubeadm.go:309] 
	I0314 01:06:32.915613   66232 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 01:06:32.915700   66232 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 01:06:32.915712   66232 kubeadm.go:309] 
	I0314 01:06:32.915855   66232 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 01:06:32.915955   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 01:06:32.916023   66232 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 01:06:32.916088   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 01:06:32.916154   66232 kubeadm.go:393] duration metric: took 7m58.036160375s to StartCluster
	I0314 01:06:32.916166   66232 kubeadm.go:309] 
	I0314 01:06:32.916226   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:06:32.916295   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:06:32.972336   66232 cri.go:89] found id: ""
	I0314 01:06:32.972364   66232 logs.go:276] 0 containers: []
	W0314 01:06:32.972371   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:06:32.972380   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:06:32.972434   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:06:33.023008   66232 cri.go:89] found id: ""
	I0314 01:06:33.023039   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.023050   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:06:33.023057   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:06:33.023130   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:06:33.061974   66232 cri.go:89] found id: ""
	I0314 01:06:33.062002   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.062011   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:06:33.062017   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:06:33.062085   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:06:33.101221   66232 cri.go:89] found id: ""
	I0314 01:06:33.101252   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.101264   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:06:33.101271   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:06:33.101330   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:06:33.139665   66232 cri.go:89] found id: ""
	I0314 01:06:33.139689   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.139697   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:06:33.139707   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:06:33.139753   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:06:33.186493   66232 cri.go:89] found id: ""
	I0314 01:06:33.186519   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.186530   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:06:33.186538   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:06:33.186610   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:06:33.236042   66232 cri.go:89] found id: ""
	I0314 01:06:33.236071   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.236083   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:06:33.236091   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:06:33.236148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:06:33.279285   66232 cri.go:89] found id: ""
	I0314 01:06:33.279316   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.279326   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:06:33.279338   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:06:33.279361   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:06:33.331702   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:06:33.331734   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:06:33.347222   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:06:33.347249   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:06:33.437201   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:06:33.437225   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:06:33.437240   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:06:33.550099   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:06:33.550135   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0314 01:06:33.596794   66232 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 01:06:33.596833   66232 out.go:239] * 
	W0314 01:06:33.596906   66232 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 01:06:33.596927   66232 out.go:239] * 
	W0314 01:06:33.597713   66232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 01:06:33.601567   66232 out.go:177] 
	W0314 01:06:33.602661   66232 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 01:06:33.602704   66232 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 01:06:33.602722   66232 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 01:06:33.604223   66232 out.go:177] 
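The suggestion above can be applied by restarting the profile with the cgroup-driver override and then re-checking the kubelet. A minimal sketch, with the flag quoted from the log's own suggestion; "<profile>" is a hypothetical placeholder for the failing profile:

    # workaround suggested in the log output above
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
    # if startup still fails, inspect the kubelet on the node
    minikube -p <profile> ssh -- "sudo journalctl -xeu kubelet | tail -n 100"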
	
	
	==> CRI-O <==
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.783284994Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378704783264112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6479afb-69f6-4909-ad28-017883b96a3d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.784262612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ca434ec-68fd-4934-94b0-d31bb6b33cb3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.784333299Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ca434ec-68fd-4934-94b0-d31bb6b33cb3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.784692731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377926430651898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd38b11caa9ecae61709e36c11ce0a2d55a582db35b9a40c807c77c83ea9ccf0,PodSandboxId:f459af31fbcfc57a6ccf275e114c35c3196dc929a895661dae76965fc767f2ab,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377905570926741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15df755f-762d-4797-8c90-09e96eb32663,},Annotations:map[string]string{io.kubernetes.container.hash: ddbb423,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82,PodSandboxId:dc13a072da6f67eedafd74d84c10c85a81cd31eb64df8d50339e03be53212b38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377903052682887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cc7x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ab007b-5498-4883-84b9-f034c3095fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 59d5a1f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377895532784689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0,PodSandboxId:3c29862f10046257df3c4741db6fa642102eca7ee5f59b4019823b024d41973d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377895517289307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7dwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e793aa69-a2c7-4404-9b74-
ed4ac39cb249,},Annotations:map[string]string{io.kubernetes.container.hash: d410d4fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648,PodSandboxId:d88399beaea4d05e65c3416e790f5f7d51e396ae61e213e70e323075217a3e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377890916987779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3e82cdea584c2c894c286b610c565cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8,PodSandboxId:7e10a1fb88c9c5d0ab1c8f5ec75431ea3309808ead8a210fa440c25afa3c0af2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377890932856696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f37b4408e1cefe3c5c4e0063552
da25c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c65a79f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b,PodSandboxId:0b6930e6937c38ec072f1d2c07b9e70101900faa20b50745f75131d4f9036509,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377890851666917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d91634415537e5502ac9ce085db4
d44,},Annotations:map[string]string{io.kubernetes.container.hash: bcf0e038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef,PodSandboxId:9b9d8f1b30ffc0b4fba311aa38b8f62b07a3c16df78b829a774d23e0c51ebabe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377890822666869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb12f5c0d3069473d890f54bf58f0c9
9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ca434ec-68fd-4934-94b0-d31bb6b33cb3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.828520246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87101309-0e58-4ee7-a790-dd0e55d0c671 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.828592930Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87101309-0e58-4ee7-a790-dd0e55d0c671 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.830937063Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f8d501a-deb6-4c07-8474-a8bc4d43f392 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.831303702Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378704831284662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f8d501a-deb6-4c07-8474-a8bc4d43f392 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.831977956Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=776080b8-7ebe-4530-bca3-fe5bc655305a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.832079424Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=776080b8-7ebe-4530-bca3-fe5bc655305a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.832267202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377926430651898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd38b11caa9ecae61709e36c11ce0a2d55a582db35b9a40c807c77c83ea9ccf0,PodSandboxId:f459af31fbcfc57a6ccf275e114c35c3196dc929a895661dae76965fc767f2ab,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377905570926741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15df755f-762d-4797-8c90-09e96eb32663,},Annotations:map[string]string{io.kubernetes.container.hash: ddbb423,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82,PodSandboxId:dc13a072da6f67eedafd74d84c10c85a81cd31eb64df8d50339e03be53212b38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377903052682887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cc7x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ab007b-5498-4883-84b9-f034c3095fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 59d5a1f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377895532784689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0,PodSandboxId:3c29862f10046257df3c4741db6fa642102eca7ee5f59b4019823b024d41973d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377895517289307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7dwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e793aa69-a2c7-4404-9b74-
ed4ac39cb249,},Annotations:map[string]string{io.kubernetes.container.hash: d410d4fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648,PodSandboxId:d88399beaea4d05e65c3416e790f5f7d51e396ae61e213e70e323075217a3e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377890916987779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3e82cdea584c2c894c286b610c565cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8,PodSandboxId:7e10a1fb88c9c5d0ab1c8f5ec75431ea3309808ead8a210fa440c25afa3c0af2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377890932856696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f37b4408e1cefe3c5c4e0063552
da25c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c65a79f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b,PodSandboxId:0b6930e6937c38ec072f1d2c07b9e70101900faa20b50745f75131d4f9036509,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377890851666917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d91634415537e5502ac9ce085db4
d44,},Annotations:map[string]string{io.kubernetes.container.hash: bcf0e038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef,PodSandboxId:9b9d8f1b30ffc0b4fba311aa38b8f62b07a3c16df78b829a774d23e0c51ebabe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377890822666869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb12f5c0d3069473d890f54bf58f0c9
9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=776080b8-7ebe-4530-bca3-fe5bc655305a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.872369390Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f44f8691-2f6e-4bfa-9960-1200d543c1ed name=/runtime.v1.RuntimeService/Version
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.872508456Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f44f8691-2f6e-4bfa-9960-1200d543c1ed name=/runtime.v1.RuntimeService/Version
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.873590910Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c0403e4-4a08-4dd7-bec8-d1e67e72199e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.874047965Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378704874025155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c0403e4-4a08-4dd7-bec8-d1e67e72199e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.874668823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f530bd7-2446-43a0-b31d-dcb295060d2a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.874722461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f530bd7-2446-43a0-b31d-dcb295060d2a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.874909862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377926430651898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd38b11caa9ecae61709e36c11ce0a2d55a582db35b9a40c807c77c83ea9ccf0,PodSandboxId:f459af31fbcfc57a6ccf275e114c35c3196dc929a895661dae76965fc767f2ab,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377905570926741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15df755f-762d-4797-8c90-09e96eb32663,},Annotations:map[string]string{io.kubernetes.container.hash: ddbb423,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82,PodSandboxId:dc13a072da6f67eedafd74d84c10c85a81cd31eb64df8d50339e03be53212b38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377903052682887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cc7x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ab007b-5498-4883-84b9-f034c3095fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 59d5a1f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377895532784689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0,PodSandboxId:3c29862f10046257df3c4741db6fa642102eca7ee5f59b4019823b024d41973d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377895517289307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7dwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e793aa69-a2c7-4404-9b74-
ed4ac39cb249,},Annotations:map[string]string{io.kubernetes.container.hash: d410d4fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648,PodSandboxId:d88399beaea4d05e65c3416e790f5f7d51e396ae61e213e70e323075217a3e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377890916987779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3e82cdea584c2c894c286b610c565cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8,PodSandboxId:7e10a1fb88c9c5d0ab1c8f5ec75431ea3309808ead8a210fa440c25afa3c0af2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377890932856696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f37b4408e1cefe3c5c4e0063552
da25c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c65a79f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b,PodSandboxId:0b6930e6937c38ec072f1d2c07b9e70101900faa20b50745f75131d4f9036509,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377890851666917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d91634415537e5502ac9ce085db4
d44,},Annotations:map[string]string{io.kubernetes.container.hash: bcf0e038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef,PodSandboxId:9b9d8f1b30ffc0b4fba311aa38b8f62b07a3c16df78b829a774d23e0c51ebabe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377890822666869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb12f5c0d3069473d890f54bf58f0c9
9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f530bd7-2446-43a0-b31d-dcb295060d2a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.913051530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=758ce8db-aa30-4c93-8cfa-fbe09773e141 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.913151050Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=758ce8db-aa30-4c93-8cfa-fbe09773e141 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.914673896Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=782755bc-edd8-4c25-8bd7-6a94ebf6d1e5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.915056193Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378704915035643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=782755bc-edd8-4c25-8bd7-6a94ebf6d1e5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.915718954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87ea0437-c49d-4220-9082-8b1315a4f22a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.915767758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87ea0437-c49d-4220-9082-8b1315a4f22a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:11:44 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:11:44.915948443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377926430651898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd38b11caa9ecae61709e36c11ce0a2d55a582db35b9a40c807c77c83ea9ccf0,PodSandboxId:f459af31fbcfc57a6ccf275e114c35c3196dc929a895661dae76965fc767f2ab,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377905570926741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15df755f-762d-4797-8c90-09e96eb32663,},Annotations:map[string]string{io.kubernetes.container.hash: ddbb423,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82,PodSandboxId:dc13a072da6f67eedafd74d84c10c85a81cd31eb64df8d50339e03be53212b38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377903052682887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cc7x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ab007b-5498-4883-84b9-f034c3095fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 59d5a1f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377895532784689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0,PodSandboxId:3c29862f10046257df3c4741db6fa642102eca7ee5f59b4019823b024d41973d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377895517289307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7dwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e793aa69-a2c7-4404-9b74-
ed4ac39cb249,},Annotations:map[string]string{io.kubernetes.container.hash: d410d4fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648,PodSandboxId:d88399beaea4d05e65c3416e790f5f7d51e396ae61e213e70e323075217a3e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377890916987779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3e82cdea584c2c894c286b610c565cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8,PodSandboxId:7e10a1fb88c9c5d0ab1c8f5ec75431ea3309808ead8a210fa440c25afa3c0af2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377890932856696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f37b4408e1cefe3c5c4e0063552
da25c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c65a79f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b,PodSandboxId:0b6930e6937c38ec072f1d2c07b9e70101900faa20b50745f75131d4f9036509,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377890851666917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d91634415537e5502ac9ce085db4
d44,},Annotations:map[string]string{io.kubernetes.container.hash: bcf0e038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef,PodSandboxId:9b9d8f1b30ffc0b4fba311aa38b8f62b07a3c16df78b829a774d23e0c51ebabe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377890822666869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb12f5c0d3069473d890f54bf58f0c9
9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87ea0437-c49d-4220-9082-8b1315a4f22a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	051f66d3597a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   5fa711b504570       storage-provisioner
	cd38b11caa9ec       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   f459af31fbcfc       busybox
	e87ba9e92390a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   dc13a072da6f6       coredns-5dd5756b68-cc7x2
	5306eb697d68f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   5fa711b504570       storage-provisioner
	08cdc002a4003       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   3c29862f10046       kube-proxy-s7dwp
	2ad67f5626011       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   7e10a1fb88c9c       etcd-default-k8s-diff-port-652215
	fe628f4a1ccd1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   d88399beaea4d       kube-controller-manager-default-k8s-diff-port-652215
	a4ee2cfc6f4e7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   0b6930e6937c3       kube-apiserver-default-k8s-diff-port-652215
	46a128a58b665       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   9b9d8f1b30ffc       kube-scheduler-default-k8s-diff-port-652215
	
	
	==> coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40981 - 59055 "HINFO IN 6156123757758169156.6499896433233568811. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010389216s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-652215
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-652215
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=default-k8s-diff-port-652215
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T00_50_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:50:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-652215
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 01:11:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 01:08:58 +0000   Thu, 14 Mar 2024 00:50:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 01:08:58 +0000   Thu, 14 Mar 2024 00:50:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 01:08:58 +0000   Thu, 14 Mar 2024 00:50:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 01:08:58 +0000   Thu, 14 Mar 2024 00:58:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.7
	  Hostname:    default-k8s-diff-port-652215
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a3cfe97cfe94492b0d86ace3f97a572
	  System UUID:                0a3cfe97-cfe9-4492-b0d8-6ace3f97a572
	  Boot ID:                    42fb8b0e-95b1-411a-afa5-f17310c551d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-cc7x2                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-652215                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-652215              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-652215     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-s7dwp                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-652215              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-kll8v                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-652215 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-652215 event: Registered Node default-k8s-diff-port-652215 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-652215 event: Registered Node default-k8s-diff-port-652215 in Controller
	
	
	==> dmesg <==
	[Mar14 00:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053291] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.592793] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.829169] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.646016] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar14 00:58] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.061012] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067401] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.185645] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.163529] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.267405] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +5.412315] systemd-fstab-generator[778]: Ignoring "noauto" option for root device
	[  +0.063391] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.072703] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +5.584815] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.516378] systemd-fstab-generator[1494]: Ignoring "noauto" option for root device
	[  +3.213665] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.456020] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] <==
	{"level":"info","ts":"2024-03-14T00:58:34.462305Z","caller":"traceutil/trace.go:171","msg":"trace[1882748929] linearizableReadLoop","detail":"{readStateIndex:678; appliedIndex:677; }","duration":"214.686365ms","start":"2024-03-14T00:58:34.247597Z","end":"2024-03-14T00:58:34.462284Z","steps":["trace[1882748929] 'read index received'  (duration: 146.603425ms)","trace[1882748929] 'applied index is now lower than readState.Index'  (duration: 68.081858ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T00:58:34.462548Z","caller":"traceutil/trace.go:171","msg":"trace[1162118816] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"262.848445ms","start":"2024-03-14T00:58:34.199685Z","end":"2024-03-14T00:58:34.462533Z","steps":["trace[1162118816] 'process raft request'  (duration: 194.570379ms)","trace[1162118816] 'compare'  (duration: 67.690652ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T00:58:34.462707Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.126327ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-kll8v.17bc7ba3a930b4d1\" ","response":"range_response_count:1 size:962"}
	{"level":"info","ts":"2024-03-14T00:58:34.462825Z","caller":"traceutil/trace.go:171","msg":"trace[821679897] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-57f55c9bc5-kll8v.17bc7ba3a930b4d1; range_end:; response_count:1; response_revision:635; }","duration":"215.259321ms","start":"2024-03-14T00:58:34.247554Z","end":"2024-03-14T00:58:34.462813Z","steps":["trace[821679897] 'agreement among raft nodes before linearized reading'  (duration: 215.09155ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:34.849353Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.131529ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13993403373957011111 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-kll8v.17bc7ba3a936be20\" mod_revision:601 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-kll8v.17bc7ba3a936be20\" value_size:690 lease:4770031337102234744 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-kll8v.17bc7ba3a936be20\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T00:58:34.849594Z","caller":"traceutil/trace.go:171","msg":"trace[1479207147] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"307.897939ms","start":"2024-03-14T00:58:34.541673Z","end":"2024-03-14T00:58:34.849571Z","steps":["trace[1479207147] 'process raft request'  (duration: 116.191342ms)","trace[1479207147] 'compare'  (duration: 191.04465ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T00:58:34.84973Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T00:58:34.541661Z","time spent":"308.01972ms","remote":"127.0.0.1:57342","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-kll8v.17bc7ba3a936be20\" mod_revision:601 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-kll8v.17bc7ba3a936be20\" value_size:690 lease:4770031337102234744 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-kll8v.17bc7ba3a936be20\" > >"}
	{"level":"info","ts":"2024-03-14T00:58:55.570555Z","caller":"traceutil/trace.go:171","msg":"trace[2068194875] linearizableReadLoop","detail":"{readStateIndex:700; appliedIndex:699; }","duration":"169.085182ms","start":"2024-03-14T00:58:55.401447Z","end":"2024-03-14T00:58:55.570532Z","steps":["trace[2068194875] 'read index received'  (duration: 168.822389ms)","trace[2068194875] 'applied index is now lower than readState.Index'  (duration: 262.095µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T00:58:55.570812Z","caller":"traceutil/trace.go:171","msg":"trace[1636615843] transaction","detail":"{read_only:false; response_revision:653; number_of_response:1; }","duration":"359.075798ms","start":"2024-03-14T00:58:55.211716Z","end":"2024-03-14T00:58:55.570792Z","steps":["trace[1636615843] 'process raft request'  (duration: 358.594319ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:55.570932Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T00:58:55.211701Z","time spent":"359.168771ms","remote":"127.0.0.1:57546","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":600,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-652215\" mod_revision:642 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-652215\" value_size:531 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-652215\" > >"}
	{"level":"warn","ts":"2024-03-14T00:58:55.571511Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.127987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T00:58:55.5716Z","caller":"traceutil/trace.go:171","msg":"trace[1670733807] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:653; }","duration":"170.217957ms","start":"2024-03-14T00:58:55.401371Z","end":"2024-03-14T00:58:55.571589Z","steps":["trace[1670733807] 'agreement among raft nodes before linearized reading'  (duration: 169.45442ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:55.828792Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.590401ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13993403373957011263 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-nvqm6xba4ntaaecslq7rnvskei\" mod_revision:645 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-nvqm6xba4ntaaecslq7rnvskei\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-nvqm6xba4ntaaecslq7rnvskei\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T00:58:55.829609Z","caller":"traceutil/trace.go:171","msg":"trace[1473180427] transaction","detail":"{read_only:false; response_revision:654; number_of_response:1; }","duration":"174.618874ms","start":"2024-03-14T00:58:55.654923Z","end":"2024-03-14T00:58:55.829542Z","steps":["trace[1473180427] 'process raft request'  (duration: 45.227559ms)","trace[1473180427] 'compare'  (duration: 128.471593ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T00:58:56.10999Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.720551ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13993403373957011266 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:42328e3a7775d741>","response":"size:39"}
	{"level":"warn","ts":"2024-03-14T00:58:56.42753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.335779ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13993403373957011268 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.7\" mod_revision:647 > success:<request_put:<key:\"/registry/masterleases/192.168.61.7\" value_size:65 lease:4770031337102235457 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.7\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T00:58:56.427767Z","caller":"traceutil/trace.go:171","msg":"trace[1588088470] linearizableReadLoop","detail":"{readStateIndex:703; appliedIndex:702; }","duration":"313.760508ms","start":"2024-03-14T00:58:56.11399Z","end":"2024-03-14T00:58:56.42775Z","steps":["trace[1588088470] 'read index received'  (duration: 109.983415ms)","trace[1588088470] 'applied index is now lower than readState.Index'  (duration: 203.774851ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T00:58:56.427884Z","caller":"traceutil/trace.go:171","msg":"trace[405897399] transaction","detail":"{read_only:false; response_revision:655; number_of_response:1; }","duration":"315.239025ms","start":"2024-03-14T00:58:56.112631Z","end":"2024-03-14T00:58:56.42787Z","steps":["trace[405897399] 'process raft request'  (duration: 111.384885ms)","trace[405897399] 'compare'  (duration: 202.709213ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T00:58:56.428011Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T00:58:56.112616Z","time spent":"315.326788ms","remote":"127.0.0.1:57288","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":116,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.61.7\" mod_revision:647 > success:<request_put:<key:\"/registry/masterleases/192.168.61.7\" value_size:65 lease:4770031337102235457 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.7\" > >"}
	{"level":"warn","ts":"2024-03-14T00:58:56.428116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"314.136047ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-652215\" ","response":"range_response_count:1 size:5800"}
	{"level":"info","ts":"2024-03-14T00:58:56.428171Z","caller":"traceutil/trace.go:171","msg":"trace[495731310] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-652215; range_end:; response_count:1; response_revision:655; }","duration":"314.191952ms","start":"2024-03-14T00:58:56.11397Z","end":"2024-03-14T00:58:56.428162Z","steps":["trace[495731310] 'agreement among raft nodes before linearized reading'  (duration: 313.900456ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:56.428234Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T00:58:56.113961Z","time spent":"314.262882ms","remote":"127.0.0.1:57440","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5822,"request content":"key:\"/registry/minions/default-k8s-diff-port-652215\" "}
	{"level":"info","ts":"2024-03-14T01:08:13.034459Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":876}
	{"level":"info","ts":"2024-03-14T01:08:13.037329Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":876,"took":"2.505017ms","hash":2665122675}
	{"level":"info","ts":"2024-03-14T01:08:13.037437Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2665122675,"revision":876,"compact-revision":-1}
	
	
	==> kernel <==
	 01:11:45 up 13 min,  0 users,  load average: 0.16, 0.21, 0.15
	Linux default-k8s-diff-port-652215 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] <==
	I0314 01:08:14.707155       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 01:08:15.707699       1 handler_proxy.go:93] no RequestInfo found in the context
	W0314 01:08:15.707700       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:08:15.707942       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:08:15.707954       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0314 01:08:15.708047       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:08:15.709370       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 01:09:14.593233       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 01:09:15.709158       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:09:15.709219       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:09:15.709227       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:09:15.710304       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:09:15.710485       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:09:15.710523       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 01:10:14.593875       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 01:11:14.593734       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 01:11:15.710353       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:11:15.710486       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:11:15.710518       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:11:15.710649       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:11:15.710759       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:11:15.712546       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] <==
	I0314 01:05:58.135496       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:06:27.632842       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:06:28.144288       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:06:57.637917       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:06:58.151727       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:07:27.643182       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:07:28.160535       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:07:57.649177       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:07:58.171752       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:08:27.654564       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:08:28.181717       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:08:57.662630       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:08:58.190896       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 01:09:27.193751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="359.869µs"
	E0314 01:09:27.668073       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:09:28.199743       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 01:09:41.190336       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="166.415µs"
	E0314 01:09:57.673843       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:09:58.209724       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:10:27.679046       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:10:28.219029       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:10:57.684513       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:10:58.227982       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:11:27.690243       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:11:28.236061       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] <==
	I0314 00:58:15.762979       1 server_others.go:69] "Using iptables proxy"
	I0314 00:58:15.787000       1 node.go:141] Successfully retrieved node IP: 192.168.61.7
	I0314 00:58:15.887131       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 00:58:15.887150       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 00:58:15.890177       1 server_others.go:152] "Using iptables Proxier"
	I0314 00:58:15.890214       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 00:58:15.890345       1 server.go:846] "Version info" version="v1.28.4"
	I0314 00:58:15.890353       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:58:15.891209       1 config.go:188] "Starting service config controller"
	I0314 00:58:15.891256       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 00:58:15.891278       1 config.go:97] "Starting endpoint slice config controller"
	I0314 00:58:15.891281       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 00:58:15.891778       1 config.go:315] "Starting node config controller"
	I0314 00:58:15.891810       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 00:58:15.991464       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 00:58:15.991664       1 shared_informer.go:318] Caches are synced for service config
	I0314 00:58:15.991899       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] <==
	I0314 00:58:12.044288       1 serving.go:348] Generated self-signed cert in-memory
	W0314 00:58:14.657600       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 00:58:14.657710       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 00:58:14.657721       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 00:58:14.657727       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 00:58:14.718170       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 00:58:14.719438       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:58:14.729581       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 00:58:14.731119       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 00:58:14.731162       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 00:58:14.731181       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 00:58:14.831673       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 01:09:12 default-k8s-diff-port-652215 kubelet[910]: E0314 01:09:12.192779     910 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 14 01:09:12 default-k8s-diff-port-652215 kubelet[910]: E0314 01:09:12.192867     910 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 14 01:09:12 default-k8s-diff-port-652215 kubelet[910]: E0314 01:09:12.193238     910 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-g9tq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-kll8v_kube-system(9060285f-ee6f-4d17-a7a6-a5a24f88d80a): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 14 01:09:12 default-k8s-diff-port-652215 kubelet[910]: E0314 01:09:12.193313     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:09:27 default-k8s-diff-port-652215 kubelet[910]: E0314 01:09:27.174475     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:09:41 default-k8s-diff-port-652215 kubelet[910]: E0314 01:09:41.174169     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:09:52 default-k8s-diff-port-652215 kubelet[910]: E0314 01:09:52.174480     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:10:06 default-k8s-diff-port-652215 kubelet[910]: E0314 01:10:06.174725     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:10:10 default-k8s-diff-port-652215 kubelet[910]: E0314 01:10:10.205596     910 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 01:10:10 default-k8s-diff-port-652215 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 01:10:10 default-k8s-diff-port-652215 kubelet[910]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 01:10:10 default-k8s-diff-port-652215 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:10:10 default-k8s-diff-port-652215 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:10:20 default-k8s-diff-port-652215 kubelet[910]: E0314 01:10:20.174604     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:10:33 default-k8s-diff-port-652215 kubelet[910]: E0314 01:10:33.174237     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:10:46 default-k8s-diff-port-652215 kubelet[910]: E0314 01:10:46.174822     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:11:01 default-k8s-diff-port-652215 kubelet[910]: E0314 01:11:01.174517     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:11:10 default-k8s-diff-port-652215 kubelet[910]: E0314 01:11:10.203199     910 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 01:11:10 default-k8s-diff-port-652215 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 01:11:10 default-k8s-diff-port-652215 kubelet[910]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 01:11:10 default-k8s-diff-port-652215 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:11:10 default-k8s-diff-port-652215 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:11:14 default-k8s-diff-port-652215 kubelet[910]: E0314 01:11:14.174772     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:11:27 default-k8s-diff-port-652215 kubelet[910]: E0314 01:11:27.174042     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:11:40 default-k8s-diff-port-652215 kubelet[910]: E0314 01:11:40.176964     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	
	
	==> storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] <==
	I0314 00:58:46.528674       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 00:58:46.547307       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 00:58:46.547478       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 00:59:03.950035       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 00:59:03.950277       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-652215_8a1ed257-7f3b-4fd5-9395-baf25a9fe059!
	I0314 00:59:03.951276       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"13b9e68c-06e7-4501-9a93-d635a26c3276", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-652215_8a1ed257-7f3b-4fd5-9395-baf25a9fe059 became leader
	I0314 00:59:04.051322       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-652215_8a1ed257-7f3b-4fd5-9395-baf25a9fe059!
	
	
	==> storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] <==
	I0314 00:58:15.719816       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0314 00:58:45.722300       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-652215 -n default-k8s-diff-port-652215
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-652215 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-kll8v
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-652215 describe pod metrics-server-57f55c9bc5-kll8v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-652215 describe pod metrics-server-57f55c9bc5-kll8v: exit status 1 (66.232363ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-kll8v" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-652215 describe pod metrics-server-57f55c9bc5-kll8v: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.26s)
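The post-mortem helpers above already show the two commands worth re-running by hand when this assertion trips. A minimal sketch, assuming the default-k8s-diff-port-652215 context is still present in the local kubeconfig and that the failed pod name (which changes per run) is taken from the first command's output:

    # List every pod not in phase Running, across all namespaces
    # (the same field selector helpers_test.go uses for its post-mortem).
    kubectl --context default-k8s-diff-port-652215 get po -A --field-selector=status.phase!=Running

    # Describe the reported pod to see its events and image-pull errors;
    # substitute the name printed above, since the ReplicaSet may have replaced it by now.
    kubectl --context default-k8s-diff-port-652215 -n kube-system describe pod metrics-server-57f55c9bc5-kll8v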

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.4s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0314 01:03:36.336147   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0314 01:04:00.714257   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 01:04:35.522872   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 01:04:44.448866   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0314 01:04:55.862200   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 01:05:58.568488   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 01:06:11.745737   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 01:06:18.907426   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-164135 -n embed-certs-164135
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-14 01:12:27.721586289 +0000 UTC m=+6375.791347770
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
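What the harness is doing here is a label-selector wait with a 9-minute budget against the kubernetes-dashboard namespace. An equivalent manual check, as a sketch that assumes the embed-certs-164135 context is reachable from the workstation:

    # Block until the dashboard pod the test expects reports Ready,
    # or give up after the same 9m0s the test allows (540s).
    kubectl --context embed-certs-164135 -n kubernetes-dashboard \
      wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=540s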
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-164135 -n embed-certs-164135
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-164135 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-164135 logs -n 25: (2.173541027s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-326260 sudo cat                              | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo find                             | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo crio                             | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-326260                                       | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-573365 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | disable-driver-mounts-573365                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:51 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-164135            | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-585806             | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-652215  | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC | 14 Mar 24 00:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC |                     |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-004791        | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-164135                 | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC | 14 Mar 24 01:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-585806                  | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-652215       | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 00:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-004791             | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC | 14 Mar 24 00:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:54:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:54:03.108880   66232 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:54:03.109016   66232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:54:03.109028   66232 out.go:304] Setting ErrFile to fd 2...
	I0314 00:54:03.109034   66232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:54:03.109233   66232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:54:03.109796   66232 out.go:298] Setting JSON to false
	I0314 00:54:03.110638   66232 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5786,"bootTime":1710371857,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:54:03.110699   66232 start.go:139] virtualization: kvm guest
	I0314 00:54:03.113106   66232 out.go:177] * [old-k8s-version-004791] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:54:03.114565   66232 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:54:03.115894   66232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:54:03.114598   66232 notify.go:220] Checking for updates...
	I0314 00:54:03.119029   66232 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:54:03.120493   66232 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:54:03.121915   66232 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:54:03.123383   66232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:54:03.125258   66232 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:54:03.125814   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:54:03.125873   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:54:03.140521   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0314 00:54:03.140889   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:54:03.141339   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:54:03.141362   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:54:03.141702   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:54:03.141898   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:54:03.143989   66232 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0314 00:54:03.145403   66232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:54:03.145671   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:54:03.145711   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:54:03.159852   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0314 00:54:03.160244   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:54:03.160722   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:54:03.160742   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:54:03.161088   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:54:03.161279   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:54:03.197047   66232 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 00:54:03.198624   66232 start.go:297] selected driver: kvm2
	I0314 00:54:03.198642   66232 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:54:03.198784   66232 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:54:03.199455   66232 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:54:03.199536   66232 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:54:03.214619   66232 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:54:03.214983   66232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 00:54:03.215045   66232 cni.go:84] Creating CNI manager for ""
	I0314 00:54:03.215065   66232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:54:03.215109   66232 start.go:340] cluster config:
	{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:54:03.215204   66232 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:54:03.217175   66232 out.go:177] * Starting "old-k8s-version-004791" primary control-plane node in "old-k8s-version-004791" cluster
	I0314 00:54:03.607045   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:03.218613   66232 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:54:03.218655   66232 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0314 00:54:03.218680   66232 cache.go:56] Caching tarball of preloaded images
	I0314 00:54:03.218748   66232 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 00:54:03.218758   66232 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0314 00:54:03.218868   66232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:54:03.219079   66232 start.go:360] acquireMachinesLock for old-k8s-version-004791: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:54:06.679066   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:12.759084   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:15.831164   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:21.911055   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:24.983011   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:31.063042   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:34.135127   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:40.215026   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:43.287108   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:49.367033   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:52.439207   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:58.519055   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:01.591066   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:07.671067   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:10.743137   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:16.823021   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:19.895094   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:25.975060   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:29.047059   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:35.127005   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:38.199075   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:44.279056   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:47.351112   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:53.431074   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:56.503093   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:02.583065   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:05.655062   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:11.735056   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:14.807089   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:20.887027   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:23.959111   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:30.039063   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:33.111114   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:39.191071   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:42.263146   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:48.343110   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:51.415094   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:57.495078   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:00.567113   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:06.647070   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:09.719103   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:15.799052   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:18.871072   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:21.875726   65864 start.go:364] duration metric: took 3m53.150432404s to acquireMachinesLock for "no-preload-585806"
	I0314 00:57:21.875777   65864 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:57:21.875782   65864 fix.go:54] fixHost starting: 
	I0314 00:57:21.876117   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:57:21.876145   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:57:21.891135   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0314 00:57:21.891589   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:57:21.892096   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:57:21.892118   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:57:21.892476   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:57:21.892705   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:21.892868   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:57:21.894635   65864 fix.go:112] recreateIfNeeded on no-preload-585806: state=Stopped err=<nil>
	I0314 00:57:21.894652   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	W0314 00:57:21.894870   65864 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:57:21.896740   65864 out.go:177] * Restarting existing kvm2 VM for "no-preload-585806" ...
	I0314 00:57:21.898041   65864 main.go:141] libmachine: (no-preload-585806) Calling .Start
	I0314 00:57:21.898219   65864 main.go:141] libmachine: (no-preload-585806) Ensuring networks are active...
	I0314 00:57:21.899235   65864 main.go:141] libmachine: (no-preload-585806) Ensuring network default is active
	I0314 00:57:21.899677   65864 main.go:141] libmachine: (no-preload-585806) Ensuring network mk-no-preload-585806 is active
	I0314 00:57:21.900069   65864 main.go:141] libmachine: (no-preload-585806) Getting domain xml...
	I0314 00:57:21.900819   65864 main.go:141] libmachine: (no-preload-585806) Creating domain...
	I0314 00:57:23.105194   65864 main.go:141] libmachine: (no-preload-585806) Waiting to get IP...
	I0314 00:57:23.106090   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.106528   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.106637   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.106516   66729 retry.go:31] will retry after 255.90484ms: waiting for machine to come up
	I0314 00:57:23.364317   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.364804   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.364826   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.364757   66729 retry.go:31] will retry after 364.462281ms: waiting for machine to come up
	I0314 00:57:21.873289   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:57:21.873326   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:57:21.873694   65557 buildroot.go:166] provisioning hostname "embed-certs-164135"
	I0314 00:57:21.873720   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:57:21.873951   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:57:21.875591   65557 machine.go:97] duration metric: took 4m37.40921849s to provisionDockerMachine
	I0314 00:57:21.875631   65557 fix.go:56] duration metric: took 4m37.430459802s for fixHost
	I0314 00:57:21.875640   65557 start.go:83] releasing machines lock for "embed-certs-164135", held for 4m37.43047806s
	W0314 00:57:21.875666   65557 start.go:713] error starting host: provision: host is not running
	W0314 00:57:21.875751   65557 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0314 00:57:21.875760   65557 start.go:728] Will try again in 5 seconds ...
	I0314 00:57:23.731388   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.731971   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.732021   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.731924   66729 retry.go:31] will retry after 426.10288ms: waiting for machine to come up
	I0314 00:57:24.159436   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:24.159930   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:24.159966   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:24.159889   66729 retry.go:31] will retry after 490.499532ms: waiting for machine to come up
	I0314 00:57:24.651751   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:24.652239   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:24.652273   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:24.652218   66729 retry.go:31] will retry after 719.835184ms: waiting for machine to come up
	I0314 00:57:25.374185   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:25.374702   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:25.374728   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:25.374660   66729 retry.go:31] will retry after 944.773779ms: waiting for machine to come up
	I0314 00:57:26.320707   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:26.321049   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:26.321080   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:26.320994   66729 retry.go:31] will retry after 1.088133876s: waiting for machine to come up
	I0314 00:57:27.410642   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:27.411035   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:27.411066   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:27.410989   66729 retry.go:31] will retry after 1.379863279s: waiting for machine to come up
	I0314 00:57:26.877563   65557 start.go:360] acquireMachinesLock for embed-certs-164135: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:57:28.792154   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:28.792533   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:28.792564   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:28.792473   66729 retry.go:31] will retry after 1.814530842s: waiting for machine to come up
	I0314 00:57:30.609244   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:30.609658   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:30.609693   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:30.609597   66729 retry.go:31] will retry after 1.625136332s: waiting for machine to come up
	I0314 00:57:32.236903   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:32.237390   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:32.237409   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:32.237352   66729 retry.go:31] will retry after 1.788940449s: waiting for machine to come up
	I0314 00:57:34.028330   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:34.028825   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:34.028863   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:34.028779   66729 retry.go:31] will retry after 3.427808205s: waiting for machine to come up
	I0314 00:57:37.458317   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:37.458803   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:37.458835   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:37.458738   66729 retry.go:31] will retry after 3.173848854s: waiting for machine to come up
	I0314 00:57:41.915825   66021 start.go:364] duration metric: took 3m51.688049305s to acquireMachinesLock for "default-k8s-diff-port-652215"
	I0314 00:57:41.915886   66021 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:57:41.915895   66021 fix.go:54] fixHost starting: 
	I0314 00:57:41.916343   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:57:41.916378   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:57:41.933352   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37953
	I0314 00:57:41.933827   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:57:41.934418   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:57:41.934441   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:57:41.934820   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:57:41.934993   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:57:41.935162   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:57:41.936554   66021 fix.go:112] recreateIfNeeded on default-k8s-diff-port-652215: state=Stopped err=<nil>
	I0314 00:57:41.936586   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	W0314 00:57:41.936734   66021 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:57:41.939097   66021 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-652215" ...
	I0314 00:57:40.636094   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.636607   65864 main.go:141] libmachine: (no-preload-585806) Found IP for machine: 192.168.39.115
	I0314 00:57:40.636638   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has current primary IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.636645   65864 main.go:141] libmachine: (no-preload-585806) Reserving static IP address...
	I0314 00:57:40.637156   65864 main.go:141] libmachine: (no-preload-585806) Reserved static IP address: 192.168.39.115
	I0314 00:57:40.637189   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "no-preload-585806", mac: "52:54:00:2a:3a:3b", ip: "192.168.39.115"} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.637199   65864 main.go:141] libmachine: (no-preload-585806) Waiting for SSH to be available...
	I0314 00:57:40.637238   65864 main.go:141] libmachine: (no-preload-585806) DBG | skip adding static IP to network mk-no-preload-585806 - found existing host DHCP lease matching {name: "no-preload-585806", mac: "52:54:00:2a:3a:3b", ip: "192.168.39.115"}
	I0314 00:57:40.637254   65864 main.go:141] libmachine: (no-preload-585806) DBG | Getting to WaitForSSH function...
	I0314 00:57:40.639772   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.640240   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.640272   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.640445   65864 main.go:141] libmachine: (no-preload-585806) DBG | Using SSH client type: external
	I0314 00:57:40.640474   65864 main.go:141] libmachine: (no-preload-585806) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa (-rw-------)
	I0314 00:57:40.640508   65864 main.go:141] libmachine: (no-preload-585806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:57:40.640524   65864 main.go:141] libmachine: (no-preload-585806) DBG | About to run SSH command:
	I0314 00:57:40.640533   65864 main.go:141] libmachine: (no-preload-585806) DBG | exit 0
	I0314 00:57:40.770988   65864 main.go:141] libmachine: (no-preload-585806) DBG | SSH cmd err, output: <nil>: 
	I0314 00:57:40.771390   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetConfigRaw
	I0314 00:57:40.772025   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:40.774781   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.775128   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.775161   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.775407   65864 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/config.json ...
	I0314 00:57:40.775636   65864 machine.go:94] provisionDockerMachine start ...
	I0314 00:57:40.775658   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:40.775856   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:40.778051   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.778420   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.778447   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.778517   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:40.778728   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.778917   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.779101   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:40.779283   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:40.779521   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:40.779535   65864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:57:40.891616   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:57:40.891661   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:40.891913   65864 buildroot.go:166] provisioning hostname "no-preload-585806"
	I0314 00:57:40.891947   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:40.892139   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:40.895038   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.895441   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.895473   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.895593   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:40.895778   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.895899   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.896044   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:40.896206   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:40.896418   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:40.896438   65864 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-585806 && echo "no-preload-585806" | sudo tee /etc/hostname
	I0314 00:57:41.027921   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-585806
	
	I0314 00:57:41.027946   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.030406   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.030826   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.030856   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.031091   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.031314   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.031458   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.031656   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.031820   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.032043   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.032064   65864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-585806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-585806/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-585806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:57:41.152387   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:57:41.152420   65864 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:57:41.152443   65864 buildroot.go:174] setting up certificates
	I0314 00:57:41.152451   65864 provision.go:84] configureAuth start
	I0314 00:57:41.152459   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:41.152713   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:41.155431   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.155790   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.155816   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.155963   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.158272   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.158691   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.158720   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.158912   65864 provision.go:143] copyHostCerts
	I0314 00:57:41.158991   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:57:41.159005   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:57:41.159094   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:57:41.159204   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:57:41.159213   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:57:41.159242   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:57:41.159299   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:57:41.159306   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:57:41.159326   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:57:41.159380   65864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.no-preload-585806 san=[127.0.0.1 192.168.39.115 localhost minikube no-preload-585806]
	I0314 00:57:41.204543   65864 provision.go:177] copyRemoteCerts
	I0314 00:57:41.204599   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:57:41.204624   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.207169   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.207479   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.207505   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.207717   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.207870   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.208042   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.208200   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.294111   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:57:41.319125   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 00:57:41.344061   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:57:41.369393   65864 provision.go:87] duration metric: took 216.929827ms to configureAuth
	I0314 00:57:41.369428   65864 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:57:41.369621   65864 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:57:41.369690   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.372440   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.372782   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.372809   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.373062   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.373298   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.373543   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.373716   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.373895   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.374097   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.374122   65864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:57:41.665162   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:57:41.665200   65864 machine.go:97] duration metric: took 889.549183ms to provisionDockerMachine
	I0314 00:57:41.665214   65864 start.go:293] postStartSetup for "no-preload-585806" (driver="kvm2")
	I0314 00:57:41.665227   65864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:57:41.665243   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.665626   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:57:41.665662   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.668351   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.668798   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.668827   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.669012   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.669412   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.669635   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.669794   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.758910   65864 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:57:41.763539   65864 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:57:41.763571   65864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:57:41.763645   65864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:57:41.763719   65864 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:57:41.763809   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:57:41.774372   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:57:41.799961   65864 start.go:296] duration metric: took 134.732457ms for postStartSetup
	I0314 00:57:41.800006   65864 fix.go:56] duration metric: took 19.924222364s for fixHost
	I0314 00:57:41.800030   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.802714   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.803178   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.803201   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.803357   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.803557   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.803730   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.803888   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.804064   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.804220   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.804231   65864 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:57:41.915615   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377861.868053197
	
	I0314 00:57:41.915646   65864 fix.go:216] guest clock: 1710377861.868053197
	I0314 00:57:41.915654   65864 fix.go:229] Guest: 2024-03-14 00:57:41.868053197 +0000 UTC Remote: 2024-03-14 00:57:41.800010702 +0000 UTC m=+253.225618100 (delta=68.042495ms)
	I0314 00:57:41.915695   65864 fix.go:200] guest clock delta is within tolerance: 68.042495ms
	I0314 00:57:41.915704   65864 start.go:83] releasing machines lock for "no-preload-585806", held for 20.039948178s
	I0314 00:57:41.915733   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.916097   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:41.918713   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.919145   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.919175   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.919352   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.919878   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.920065   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.920140   65864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:57:41.920200   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.920257   65864 ssh_runner.go:195] Run: cat /version.json
	I0314 00:57:41.920279   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.922799   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923104   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923176   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.923200   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923333   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.923527   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.923572   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.923602   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923710   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.923788   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.923884   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.923950   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.924091   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.924265   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:42.004651   65864 ssh_runner.go:195] Run: systemctl --version
	I0314 00:57:42.045673   65864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:57:42.198196   65864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:57:42.204887   65864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:57:42.204968   65864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:57:42.223088   65864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:57:42.223116   65864 start.go:494] detecting cgroup driver to use...
	I0314 00:57:42.223181   65864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:57:42.240213   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:57:42.260222   65864 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:57:42.260282   65864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:57:42.279489   65864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:57:42.297898   65864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:57:42.436010   65864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:57:42.591582   65864 docker.go:233] disabling docker service ...
	I0314 00:57:42.591653   65864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:57:42.609192   65864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:57:42.629505   65864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:57:42.788667   65864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:57:42.920745   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:57:42.947679   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:57:42.970420   65864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:57:42.970496   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:42.984792   65864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:57:42.984851   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:42.998350   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:43.011001   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:43.023341   65864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:57:43.036165   65864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:57:43.047342   65864 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:57:43.047401   65864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:57:43.063390   65864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:57:43.075512   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:57:43.214939   65864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:57:43.370092   65864 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:57:43.370154   65864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:57:43.375110   65864 start.go:562] Will wait 60s for crictl version
	I0314 00:57:43.375156   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.379051   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:57:43.421498   65864 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:57:43.421587   65864 ssh_runner.go:195] Run: crio --version
	I0314 00:57:43.451281   65864 ssh_runner.go:195] Run: crio --version
	I0314 00:57:43.486171   65864 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 00:57:43.487776   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:43.490910   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:43.491299   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:43.491328   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:43.491513   65864 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 00:57:43.495972   65864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:57:43.510066   65864 kubeadm.go:877] updating cluster {Name:no-preload-585806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:57:43.510197   65864 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 00:57:43.510235   65864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:57:43.550172   65864 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 00:57:43.550198   65864 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:57:43.550251   65864 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:43.550290   65864 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.550308   65864 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.550348   65864 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.550373   65864 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.550409   65864 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.550329   65864 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0314 00:57:43.550287   65864 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.551857   65864 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.551883   65864 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.551922   65864 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.551926   65864 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.551915   65864 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.551860   65864 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:43.552047   65864 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0314 00:57:43.552087   65864 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:41.940702   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Start
	I0314 00:57:41.940872   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring networks are active...
	I0314 00:57:41.941571   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring network default is active
	I0314 00:57:41.941942   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring network mk-default-k8s-diff-port-652215 is active
	I0314 00:57:41.942369   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Getting domain xml...
	I0314 00:57:41.943060   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Creating domain...
	I0314 00:57:43.253573   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting to get IP...
	I0314 00:57:43.254399   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.254819   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.254871   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.254798   66848 retry.go:31] will retry after 250.726741ms: waiting for machine to come up
	I0314 00:57:43.507438   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.507947   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.507974   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.507889   66848 retry.go:31] will retry after 261.304364ms: waiting for machine to come up
	I0314 00:57:43.770392   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.770932   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.770992   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.770922   66848 retry.go:31] will retry after 399.951584ms: waiting for machine to come up
	I0314 00:57:44.172796   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.173301   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.173330   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:44.173250   66848 retry.go:31] will retry after 446.71472ms: waiting for machine to come up
	I0314 00:57:44.621959   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.622493   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.622524   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:44.622435   66848 retry.go:31] will retry after 594.760117ms: waiting for machine to come up
	I0314 00:57:43.767614   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.767919   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.781946   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.792745   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.820426   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.821936   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.874149   65864 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0314 00:57:43.874193   65864 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.874207   65864 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0314 00:57:43.874239   65864 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.874263   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.874281   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.909916   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0314 00:57:43.929648   65864 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0314 00:57:43.929701   65864 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.929756   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.929769   65864 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0314 00:57:43.929810   65864 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.929866   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958025   65864 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0314 00:57:43.958074   65864 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.958108   65864 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0314 00:57:43.958151   65864 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.958171   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.958188   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958124   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958192   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:44.099675   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:44.099750   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:44.099805   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:44.099859   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.099898   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:44.099943   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.099999   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0314 00:57:44.100067   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:44.185667   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:44.185697   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0314 00:57:44.185784   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0314 00:57:44.185822   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0314 00:57:44.185833   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.185860   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.185874   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:44.185787   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:44.185787   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:44.191806   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:44.191853   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0314 00:57:44.191922   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:44.205188   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0314 00:57:44.428096   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:47.084005   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.898127832s)
	I0314 00:57:47.084049   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0314 00:57:47.084073   65864 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:47.084084   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.898188272s)
	I0314 00:57:47.084114   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:47.084123   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0314 00:57:47.084163   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.898224944s)
	I0314 00:57:47.084176   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0314 00:57:47.084213   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.892265677s)
	I0314 00:57:47.084231   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0314 00:57:47.084261   65864 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.656144328s)
	I0314 00:57:47.084290   65864 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0314 00:57:47.084313   65864 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:47.084344   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:45.219284   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:45.219835   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:45.219865   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:45.219763   66848 retry.go:31] will retry after 838.074484ms: waiting for machine to come up
	I0314 00:57:46.059759   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:46.060182   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:46.060212   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:46.060124   66848 retry.go:31] will retry after 1.038046627s: waiting for machine to come up
	I0314 00:57:47.100208   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:47.100623   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:47.100651   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:47.100574   66848 retry.go:31] will retry after 1.029629423s: waiting for machine to come up
	I0314 00:57:48.131899   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:48.132332   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:48.132360   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:48.132293   66848 retry.go:31] will retry after 1.38894741s: waiting for machine to come up
	I0314 00:57:49.522727   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:49.523219   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:49.523250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:49.523177   66848 retry.go:31] will retry after 1.498715394s: waiting for machine to come up
	I0314 00:57:51.187413   65864 ssh_runner.go:235] Completed: which crictl: (4.103045994s)
	I0314 00:57:51.187456   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.103319804s)
	I0314 00:57:51.187508   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0314 00:57:51.187527   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:51.187571   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:51.187669   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:51.236123   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 00:57:51.236241   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:53.072155   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.88445651s)
	I0314 00:57:53.072191   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0314 00:57:53.072203   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.835936702s)
	I0314 00:57:53.072239   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0314 00:57:53.072216   65864 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:53.072298   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:51.024135   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:51.024551   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:51.024591   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:51.024485   66848 retry.go:31] will retry after 1.906242033s: waiting for machine to come up
	I0314 00:57:52.931992   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:52.932501   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:52.932532   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:52.932435   66848 retry.go:31] will retry after 2.502905013s: waiting for machine to come up
	I0314 00:57:55.041813   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969486159s)
	I0314 00:57:55.041846   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0314 00:57:55.041873   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:55.041921   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:56.401046   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.359096555s)
	I0314 00:57:56.401083   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0314 00:57:56.401125   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:56.401206   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:55.438250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:55.438696   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:55.438728   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:55.438645   66848 retry.go:31] will retry after 4.267197677s: waiting for machine to come up
	I0314 00:57:59.709345   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.709884   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Found IP for machine: 192.168.61.7
	I0314 00:57:59.709901   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Reserving static IP address...
	I0314 00:57:59.709912   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has current primary IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.710329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-652215", mac: "52:54:00:58:e5:b0", ip: "192.168.61.7"} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.710365   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | skip adding static IP to network mk-default-k8s-diff-port-652215 - found existing host DHCP lease matching {name: "default-k8s-diff-port-652215", mac: "52:54:00:58:e5:b0", ip: "192.168.61.7"}
	I0314 00:57:59.710387   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Reserved static IP address: 192.168.61.7
	I0314 00:57:59.710404   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for SSH to be available...
	I0314 00:57:59.710420   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Getting to WaitForSSH function...
	I0314 00:57:59.712445   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.712764   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.712794   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.712867   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Using SSH client type: external
	I0314 00:57:59.712903   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa (-rw-------)
	I0314 00:57:59.712926   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:57:59.712940   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | About to run SSH command:
	I0314 00:57:59.712946   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | exit 0
	I0314 00:57:59.831120   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | SSH cmd err, output: <nil>: 
	I0314 00:57:59.831427   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetConfigRaw
	I0314 00:57:59.832230   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:57:59.834631   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.835052   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.835085   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.835264   66021 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/config.json ...
	I0314 00:57:59.835458   66021 machine.go:94] provisionDockerMachine start ...
	I0314 00:57:59.835478   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:57:59.835700   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:57:59.838267   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.838654   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.838681   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.838814   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:57:59.838985   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.839158   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.839318   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:57:59.839533   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:59.839750   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:57:59.839764   66021 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:57:59.943463   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:57:59.943488   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:57:59.943743   66021 buildroot.go:166] provisioning hostname "default-k8s-diff-port-652215"
	I0314 00:57:59.943765   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:57:59.943905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:57:59.946244   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.946561   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.946592   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.946858   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:57:59.947069   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.947218   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.947329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:57:59.947522   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:59.947682   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:57:59.947695   66021 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-652215 && echo "default-k8s-diff-port-652215" | sudo tee /etc/hostname
	I0314 00:58:00.063433   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-652215
	
	I0314 00:58:00.063467   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.066382   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.066832   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.066872   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.067051   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.067272   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.067505   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.067706   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.067914   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:00.068139   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:00.068167   66021 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-652215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-652215/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-652215' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:01.167666   66232 start.go:364] duration metric: took 3m57.948538504s to acquireMachinesLock for "old-k8s-version-004791"
	I0314 00:58:01.167732   66232 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:58:01.167743   66232 fix.go:54] fixHost starting: 
	I0314 00:58:01.168159   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:01.168192   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:01.184977   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39193
	I0314 00:58:01.185352   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:01.185781   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:58:01.185799   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:01.186133   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:01.186318   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:01.186463   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetState
	I0314 00:58:01.187778   66232 fix.go:112] recreateIfNeeded on old-k8s-version-004791: state=Stopped err=<nil>
	I0314 00:58:01.187814   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	W0314 00:58:01.187966   66232 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:58:01.190508   66232 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-004791" ...
	I0314 00:58:00.185178   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:00.185209   66021 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:00.185258   66021 buildroot.go:174] setting up certificates
	I0314 00:58:00.185270   66021 provision.go:84] configureAuth start
	I0314 00:58:00.185286   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:58:00.185558   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:00.188566   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.188946   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.188977   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.189147   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.191605   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.191954   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.191981   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.192111   66021 provision.go:143] copyHostCerts
	I0314 00:58:00.192179   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:00.192193   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:00.192295   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:00.192409   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:00.192420   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:00.192449   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:00.192531   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:00.192541   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:00.192571   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:00.192650   66021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-652215 san=[127.0.0.1 192.168.61.7 default-k8s-diff-port-652215 localhost minikube]
	I0314 00:58:00.441714   66021 provision.go:177] copyRemoteCerts
	I0314 00:58:00.441760   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:00.441783   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.444329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.444711   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.444740   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.444905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.445096   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.445257   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.445369   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:00.529677   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:00.560670   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0314 00:58:00.589572   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:58:00.620349   66021 provision.go:87] duration metric: took 435.063551ms to configureAuth
	I0314 00:58:00.620380   66021 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:00.620576   66021 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:00.620670   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.623250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.623633   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.623663   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.623825   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.624017   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.624205   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.624346   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.624474   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:00.624650   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:00.624664   66021 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:00.940388   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:00.940416   66021 machine.go:97] duration metric: took 1.104945308s to provisionDockerMachine
	I0314 00:58:00.940430   66021 start.go:293] postStartSetup for "default-k8s-diff-port-652215" (driver="kvm2")
	I0314 00:58:00.940443   66021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:00.940513   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:00.940829   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:00.940861   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.943461   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.943854   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.943881   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.944035   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.944233   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.944392   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.944514   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.028775   66021 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:01.034219   66021 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:01.034246   66021 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:01.034319   66021 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:01.034417   66021 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:01.034534   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:01.043871   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:01.068236   66021 start.go:296] duration metric: took 127.791208ms for postStartSetup
	I0314 00:58:01.068281   66021 fix.go:56] duration metric: took 19.152386474s for fixHost
	I0314 00:58:01.068320   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.071153   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.071474   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.071519   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.071664   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.071873   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.072037   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.072184   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.072339   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:01.072546   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:01.072560   66021 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:58:01.167500   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377881.146926820
	
	I0314 00:58:01.167531   66021 fix.go:216] guest clock: 1710377881.146926820
	I0314 00:58:01.167543   66021 fix.go:229] Guest: 2024-03-14 00:58:01.14692682 +0000 UTC Remote: 2024-03-14 00:58:01.068285678 +0000 UTC m=+250.989822406 (delta=78.641142ms)
	I0314 00:58:01.167569   66021 fix.go:200] guest clock delta is within tolerance: 78.641142ms
	I0314 00:58:01.167576   66021 start.go:83] releasing machines lock for "default-k8s-diff-port-652215", held for 19.251715411s
	I0314 00:58:01.167603   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.167900   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:01.170608   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.171001   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.171041   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.171190   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171674   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171856   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171937   66021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:01.171985   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.172100   66021 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:01.172128   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.174787   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.174963   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175180   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.175209   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175343   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.175398   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175477   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.175553   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.175677   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.175741   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.175803   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.175880   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.175939   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.176003   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.251768   66021 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:01.289374   66021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:01.438966   66021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:01.445524   66021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:01.445595   66021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:01.463672   66021 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:01.463699   66021 start.go:494] detecting cgroup driver to use...
	I0314 00:58:01.463778   66021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:01.485254   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:01.503492   66021 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:01.503552   66021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:01.522423   66021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:01.537421   66021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:01.664303   66021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:01.819916   66021 docker.go:233] disabling docker service ...
	I0314 00:58:01.819980   66021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:01.838697   66021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:01.853242   66021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:02.003570   66021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:02.146836   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:02.162421   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:02.191202   66021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:58:02.191272   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.206856   66021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:02.206923   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.219794   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.233272   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.245213   66021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:02.259118   66021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:02.273991   66021 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:02.274056   66021 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:02.289319   66021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:02.300063   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:02.416447   66021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:58:02.566738   66021 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:02.566859   66021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:02.572193   66021 start.go:562] Will wait 60s for crictl version
	I0314 00:58:02.572234   66021 ssh_runner.go:195] Run: which crictl
	I0314 00:58:02.576144   66021 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:02.615025   66021 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:02.615124   66021 ssh_runner.go:195] Run: crio --version
	I0314 00:58:02.643201   66021 ssh_runner.go:195] Run: crio --version
	I0314 00:58:02.673207   66021 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
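
For readers following the log, the preceding block reconfigures cri-o (pause image, cgroupfs cgroup manager, conmon cgroup) and restarts it. The following is a minimal Go sketch of that sequence, run locally with sudo rather than through minikube's ssh_runner; it is illustrative only, not minikube's actual implementation, and the paths and image tag are taken from the lines above.

// crio_reconfig.go — illustrative sketch of the cri-o reconfiguration steps
// logged above: set the pause image, switch the cgroup manager to cgroupfs,
// pin conmon_cgroup, then reload systemd and restart cri-o.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(cmdline string) {
	out, err := exec.Command("sudo", "sh", "-c", cmdline).CombinedOutput()
	if err != nil {
		log.Fatalf("%q failed: %v\n%s", cmdline, err, out)
	}
	fmt.Printf("ok: %s\n", cmdline)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	run(fmt.Sprintf(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, conf))
	run(fmt.Sprintf(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf))
	run(fmt.Sprintf(`sed -i '/conmon_cgroup = .*/d' %s`, conf))
	run(fmt.Sprintf(`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf))
	run("systemctl daemon-reload")
	run("systemctl restart crio")
}
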
	I0314 00:58:01.192096   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .Start
	I0314 00:58:01.192279   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring networks are active...
	I0314 00:58:01.192923   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network default is active
	I0314 00:58:01.193276   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network mk-old-k8s-version-004791 is active
	I0314 00:58:01.193771   66232 main.go:141] libmachine: (old-k8s-version-004791) Getting domain xml...
	I0314 00:58:01.194453   66232 main.go:141] libmachine: (old-k8s-version-004791) Creating domain...
	I0314 00:58:02.495098   66232 main.go:141] libmachine: (old-k8s-version-004791) Waiting to get IP...
	I0314 00:58:02.496096   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:02.496509   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:02.496599   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:02.496504   66971 retry.go:31] will retry after 226.458873ms: waiting for machine to come up
	I0314 00:58:02.724812   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:02.725355   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:02.725383   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:02.725305   66971 retry.go:31] will retry after 274.59062ms: waiting for machine to come up
	I0314 00:58:03.001727   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.002335   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.002486   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.002429   66971 retry.go:31] will retry after 362.865307ms: waiting for machine to come up
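
The "will retry after …" lines above come from a retry loop that polls libvirt until the domain picks up a DHCP lease. A minimal Go sketch of that pattern is shown below; lookupIP is a hypothetical stand-in for the lease lookup, and the growing, jittered delays only mimic the retry.go entries in this log.

// wait_for_ip.go — sketch of a retry-with-backoff loop while waiting for a
// machine to come up. Not minikube's code; lookupIP is a placeholder.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP would query the libvirt network for a lease matching the domain's
// MAC address; here it always fails so the loop can be observed.
func lookupIP() (string, error) { return "", errNoLease }

func main() {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 3 // grow the base delay on each attempt
	}
	fmt.Println("timed out waiting for an IP")
}
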
	I0314 00:57:58.881850   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.480612113s)
	I0314 00:57:58.881884   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0314 00:57:58.881919   65864 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:58.881990   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:59.732349   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 00:57:59.732390   65864 cache_images.go:123] Successfully loaded all cached images
	I0314 00:57:59.732395   65864 cache_images.go:92] duration metric: took 16.182181374s to LoadCachedImages
	I0314 00:57:59.732406   65864 kubeadm.go:928] updating node { 192.168.39.115 8443 v1.29.0-rc.2 crio true true} ...
	I0314 00:57:59.732566   65864 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-585806 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
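For context, the kubelet unit fragment printed above is generated from a template filled with the node's version, name and IP. A small Go sketch of that templating step follows; it is illustrative only, and the values are simply copied from this log.

// kubelet_dropin.go — sketch of rendering the kubelet systemd drop-in shown
// in the log from a template. Not minikube's actual template code.
package main

import (
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropin))
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.29.0-rc.2", "no-preload-585806", "192.168.39.115"})
}
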
	I0314 00:57:59.732632   65864 ssh_runner.go:195] Run: crio config
	I0314 00:57:59.780946   65864 cni.go:84] Creating CNI manager for ""
	I0314 00:57:59.780969   65864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:57:59.780980   65864 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:57:59.780999   65864 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-585806 NodeName:no-preload-585806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:57:59.781184   65864 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-585806"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:57:59.781255   65864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 00:57:59.791989   65864 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:57:59.792059   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:57:59.801720   65864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0314 00:57:59.819248   65864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 00:57:59.837405   65864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 00:57:59.855909   65864 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0314 00:57:59.861139   65864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
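
The bash one-liner above rewrites /etc/hosts idempotently: strip any existing control-plane.minikube.internal entry, then append the current one. The Go sketch below expresses the same idea; the IP comes from this log, and the path is deliberately a scratch file rather than the real /etc/hosts.

// hosts_update.go — sketch of an idempotent hosts-file update equivalent to
// the bash one-liner in the log. Illustrative only.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "control-plane.minikube.internal"
	const ip = "192.168.39.115"
	path := "/tmp/hosts-example" // stand-in for /etc/hosts in this sketch

	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+entry) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, entry))
	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
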
	I0314 00:57:59.877573   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:00.004672   65864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:00.025676   65864 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806 for IP: 192.168.39.115
	I0314 00:58:00.025696   65864 certs.go:194] generating shared ca certs ...
	I0314 00:58:00.025711   65864 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:00.025861   65864 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:00.025912   65864 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:00.025925   65864 certs.go:256] generating profile certs ...
	I0314 00:58:00.026023   65864 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/client.key
	I0314 00:58:00.026093   65864 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.key.e22b08b3
	I0314 00:58:00.026150   65864 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.key
	I0314 00:58:00.026304   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:00.026342   65864 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:00.026355   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:00.026393   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:00.026424   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:00.026461   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:00.026510   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:00.027206   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:00.087876   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:00.130974   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:00.159419   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:00.202659   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 00:58:00.248014   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 00:58:00.273362   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:00.297326   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:58:00.321565   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:00.346012   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:00.370094   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:00.393592   65864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:00.411060   65864 ssh_runner.go:195] Run: openssl version
	I0314 00:58:00.417031   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:00.428430   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.433251   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.433303   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.439142   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:00.451840   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:00.466706   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.472024   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.472101   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.479004   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:00.490877   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:00.503120   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.507926   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.507973   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.513957   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:00.526055   65864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:00.531442   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:00.538049   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:00.544709   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:00.551218   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:00.557610   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:00.564187   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
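
The openssl "-checkend 86400" runs above verify that each certificate is still valid for at least 24 hours. A minimal Go sketch of the same check using crypto/x509 is shown below; the certificate path is an assumption taken from the log.

// checkend.go — sketch of a 24-hour certificate freshness check, analogous to
// "openssl x509 -noout -checkend 86400". Illustrative only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400s")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
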
	I0314 00:58:00.571582   65864 kubeadm.go:391] StartCluster: {Name:no-preload-585806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:00.571725   65864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:00.571793   65864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:00.625273   65864 cri.go:89] found id: ""
	I0314 00:58:00.625330   65864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:00.636554   65864 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:00.636582   65864 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:00.636588   65864 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:00.636630   65864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:00.648360   65864 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:00.649289   65864 kubeconfig.go:125] found "no-preload-585806" server: "https://192.168.39.115:8443"
	I0314 00:58:00.652107   65864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:00.664337   65864 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.115
	I0314 00:58:00.664378   65864 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:00.664390   65864 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:00.664436   65864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:00.702043   65864 cri.go:89] found id: ""
	I0314 00:58:00.702119   65864 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:00.721052   65864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:00.732931   65864 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:00.732961   65864 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:00.733015   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:00.743282   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:00.743363   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:00.753893   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:00.764545   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:00.764603   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:00.779121   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:00.795628   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:00.795690   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:00.807835   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:00.820920   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:00.821000   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:00.834341   65864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:00.844677   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:00.971502   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:01.810329   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:02.063422   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:02.144025   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:02.284020   65864 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:02.284117   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:02.784938   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:03.285046   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:03.349582   65864 api_server.go:72] duration metric: took 1.065560764s to wait for apiserver process to appear ...
	I0314 00:58:03.349613   65864 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:03.349634   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:03.350222   65864 api_server.go:269] stopped: https://192.168.39.115:8443/healthz: Get "https://192.168.39.115:8443/healthz": dial tcp 192.168.39.115:8443: connect: connection refused
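
The apiserver wait above (and the 403/500 healthz responses that follow later in this log) is a polling loop: hit https://<ip>:8443/healthz, tolerate connection-refused and non-200 answers, and stop once a 200 arrives or a deadline passes. A minimal Go sketch of such a probe follows; it is not minikube's implementation, the address is copied from this log, and TLS verification is skipped only because the probe does not yet trust the cluster CA.

// healthz_wait.go — sketch of an apiserver /healthz readiness poll.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.115:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthz is ok")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", code)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}
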
	I0314 00:58:02.674905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:02.677914   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:02.678319   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:02.678358   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:02.678506   66021 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:02.682714   66021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:02.696263   66021 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-652215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:02.696407   66021 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:58:02.696474   66021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:02.736997   66021 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 00:58:02.737060   66021 ssh_runner.go:195] Run: which lz4
	I0314 00:58:02.741014   66021 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 00:58:02.745225   66021 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:02.745255   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 00:58:04.577503   66021 crio.go:444] duration metric: took 1.836515386s to copy over tarball
	I0314 00:58:04.577580   66021 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
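
The preload step above copies the lz4 image tarball to the guest and unpacks it under /var while preserving xattrs, reporting a "duration metric" for each phase. Below is a small Go sketch of that extraction-and-timing step, run locally with sudo instead of through ssh_runner; it assumes an lz4 binary is available and is illustrative only.

// preload_extract.go — sketch of timing the preload tarball extraction.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}
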
	I0314 00:58:03.367211   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.367946   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.367985   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.367818   66971 retry.go:31] will retry after 545.955079ms: waiting for machine to come up
	I0314 00:58:03.915415   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.915920   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.915946   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.915836   66971 retry.go:31] will retry after 509.217519ms: waiting for machine to come up
	I0314 00:58:04.426378   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:04.426707   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:04.426730   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:04.426682   66971 retry.go:31] will retry after 834.85927ms: waiting for machine to come up
	I0314 00:58:05.263751   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:05.264214   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:05.264244   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:05.264155   66971 retry.go:31] will retry after 986.483361ms: waiting for machine to come up
	I0314 00:58:06.251927   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:06.252550   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:06.252573   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:06.252475   66971 retry.go:31] will retry after 1.151541473s: waiting for machine to come up
	I0314 00:58:07.405797   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:07.406395   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:07.406425   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:07.406349   66971 retry.go:31] will retry after 1.406754601s: waiting for machine to come up
	I0314 00:58:03.850705   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.738726   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:06.738753   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:06.738788   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.754844   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:06.754883   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:06.850175   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.859445   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:06.859483   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.350592   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:07.367299   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:07.367337   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.850476   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:08.566122   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:08.566165   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:08.566182   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:08.571741   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:08.571777   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.355046   66021 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.77743394s)
	I0314 00:58:07.355081   66021 crio.go:451] duration metric: took 2.77754644s to extract the tarball
	I0314 00:58:07.355093   66021 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:07.401032   66021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:07.451493   66021 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:58:07.451515   66021 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:58:07.451523   66021 kubeadm.go:928] updating node { 192.168.61.7 8444 v1.28.4 crio true true} ...
	I0314 00:58:07.451679   66021 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-652215 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:07.451756   66021 ssh_runner.go:195] Run: crio config
	I0314 00:58:07.500159   66021 cni.go:84] Creating CNI manager for ""
	I0314 00:58:07.500182   66021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:07.500192   66021 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:07.500211   66021 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.7 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-652215 NodeName:default-k8s-diff-port-652215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:58:07.500349   66021 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-652215"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:07.500398   66021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:58:07.515207   66021 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:07.515281   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:07.530918   66021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0314 00:58:07.558457   66021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:07.582126   66021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 00:58:07.678701   66021 ssh_runner.go:195] Run: grep 192.168.61.7	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:07.684200   66021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
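
The /etc/hosts edit above is a filter-and-append: the bash one-liner drops any stale line ending in control-plane.minikube.internal, appends the current control-plane IP, and copies the temp file back over /etc/hosts under sudo. A minimal Go sketch of the same logic (paths, IP and hostname are the ones in the log; this is an illustration, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// rewriteHosts removes any existing line that maps hostname and appends "ip<TAB>hostname".
func rewriteHosts(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+hostname) {
			continue // stale entry for the control-plane name
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Illustration only: on the test VM this runs as root against the real /etc/hosts.
	if err := rewriteHosts("/etc/hosts", "192.168.61.7", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
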
	I0314 00:58:07.701599   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:07.825784   66021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:07.848241   66021 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215 for IP: 192.168.61.7
	I0314 00:58:07.848265   66021 certs.go:194] generating shared ca certs ...
	I0314 00:58:07.848286   66021 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:07.848457   66021 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:07.848515   66021 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:07.848529   66021 certs.go:256] generating profile certs ...
	I0314 00:58:07.848644   66021 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/client.key
	I0314 00:58:07.935830   66021 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.key.b1ed833a
	I0314 00:58:07.935933   66021 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.key
	I0314 00:58:07.936092   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:07.936147   66021 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:07.936161   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:07.936191   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:07.936222   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:07.936255   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:07.936326   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:07.937040   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:07.981116   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:08.010341   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:08.036689   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:08.064909   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 00:58:08.092883   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 00:58:08.119465   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:08.146029   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 00:58:08.171735   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:08.198370   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:08.225423   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:08.253303   66021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:08.272262   66021 ssh_runner.go:195] Run: openssl version
	I0314 00:58:08.278047   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:08.289661   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.294307   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.294365   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.300267   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:08.311382   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:08.322886   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.328522   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.328588   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.335598   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:08.347048   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:08.358811   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.365065   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.365113   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.372929   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
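
The three blocks above (12268.pem, 122682.pem, minikubeCA.pem) each install a CA bundle into the node's trust store the way OpenSSL expects: copy the PEM into /usr/share/ca-certificates, compute its subject hash with openssl x509 -hash -noout, and symlink it into /etc/ssl/certs as <hash>.0. A hedged Go sketch that mirrors those two steps by shelling out to the same openssl binary (assumes openssl is on PATH and the process can write to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkIntoTrustStore hashes certPath with openssl and symlinks it into certsDir
// as <hash>.0, mirroring the "openssl x509 -hash" + "ln -fs" pair in the log.
func linkIntoTrustStore(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // "-f" semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Paths taken from the log; running this for real needs root on the node.
	if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
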
	I0314 00:58:08.384586   66021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:08.389382   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:08.395577   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:08.401901   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:08.409134   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:08.415666   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:08.422160   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
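
Each openssl x509 -checkend 86400 run above asks whether the certificate will still be valid 24 hours from now; exit status 0 means it will, non-zero means it is about to expire and would need regeneration before the restart proceeds. The equivalent check in Go, parsing the PEM and comparing NotAfter against now+24h (a sketch; the path is one of the certs from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file becomes
// invalid within d (the Go counterpart of `openssl x509 -checkend`).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
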
	I0314 00:58:08.428553   66021 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-652215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:08.428681   66021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:08.428757   66021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:08.471162   66021 cri.go:89] found id: ""
	I0314 00:58:08.471246   66021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:08.482236   66021 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:08.482258   66021 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:08.482266   66021 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:08.482318   66021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:08.492599   66021 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:08.493612   66021 kubeconfig.go:125] found "default-k8s-diff-port-652215" server: "https://192.168.61.7:8444"
	I0314 00:58:08.495896   66021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:08.509437   66021 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.7
	I0314 00:58:08.509469   66021 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:08.509498   66021 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:08.509552   66021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:08.549257   66021 cri.go:89] found id: ""
	I0314 00:58:08.549319   66021 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:08.570357   66021 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:08.580942   66021 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:08.580961   66021 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:08.581002   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 00:58:08.590668   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:08.590750   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:08.600638   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 00:58:08.610219   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:08.610289   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:08.620324   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 00:58:08.629979   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:08.630037   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:08.640264   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 00:58:08.650070   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:08.650126   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:08.661293   66021 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:08.671779   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:08.808194   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:09.724860   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:09.979007   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:10.059809   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:08.850333   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.132696   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.132738   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:09.349928   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.354965   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.355007   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:09.850589   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.855760   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.855791   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:10.350395   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:10.356047   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0314 00:58:10.363343   65864 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 00:58:10.363367   65864 api_server.go:131] duration metric: took 7.013748269s to wait for apiserver health ...
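
The loop above polls https://192.168.39.115:8443/healthz roughly every half second; while post-start hooks such as rbac/bootstrap-roles are still registering, the endpoint answers 500 with the per-check breakdown, and the wait only ends once it returns a plain 200 "ok". A minimal polling sketch in Go (endpoint taken from the log; TLS verification is skipped purely for this throwaway illustration):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.115:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
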
	I0314 00:58:10.363376   65864 cni.go:84] Creating CNI manager for ""
	I0314 00:58:10.363382   65864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:10.365214   65864 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:10.366578   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:58:10.388294   65864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:58:10.416671   65864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:58:10.432468   65864 system_pods.go:59] 8 kube-system pods found
	I0314 00:58:10.432506   65864 system_pods.go:61] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:58:10.432513   65864 system_pods.go:61] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:58:10.432522   65864 system_pods.go:61] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:58:10.432528   65864 system_pods.go:61] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:58:10.432532   65864 system_pods.go:61] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 00:58:10.432536   65864 system_pods.go:61] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:58:10.432541   65864 system_pods.go:61] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:58:10.432545   65864 system_pods.go:61] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 00:58:10.432552   65864 system_pods.go:74] duration metric: took 15.857608ms to wait for pod list to return data ...
	I0314 00:58:10.432558   65864 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:58:10.435982   65864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:58:10.436009   65864 node_conditions.go:123] node cpu capacity is 2
	I0314 00:58:10.436022   65864 node_conditions.go:105] duration metric: took 3.459248ms to run NodePressure ...
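
The NodePressure check above only reads the node's reported capacity (ephemeral storage and CPU) from its status. A client-go sketch that reads the same fields (kubeconfig path and node name are the ones appearing in this log; illustration only, not minikube's code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18375-4912/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-585806", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Capacity is a map of resource name to quantity on the node status.
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral-storage=%s cpu=%s\n", storage.String(), cpu.String())
}
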
	I0314 00:58:10.436048   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:10.711752   65864 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:58:10.718781   65864 kubeadm.go:733] kubelet initialised
	I0314 00:58:10.718802   65864 kubeadm.go:734] duration metric: took 7.016806ms waiting for restarted kubelet to initialise ...
	I0314 00:58:10.718811   65864 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:10.725838   65864 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.732973   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "coredns-76f75df574-lptfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.733003   65864 pod_ready.go:81] duration metric: took 7.130935ms for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.733015   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "coredns-76f75df574-lptfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.733024   65864 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.739301   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "etcd-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.739330   65864 pod_ready.go:81] duration metric: took 6.292816ms for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.739344   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "etcd-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.739353   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.745734   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-apiserver-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.745764   65864 pod_ready.go:81] duration metric: took 6.401917ms for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.745775   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-apiserver-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.745793   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.823797   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.823901   65864 pod_ready.go:81] duration metric: took 78.092373ms for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.823920   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.823930   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:11.221218   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-proxy-wpdb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.221255   65864 pod_ready.go:81] duration metric: took 397.31401ms for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:11.221268   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-proxy-wpdb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.221276   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:11.622051   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-scheduler-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.622089   65864 pod_ready.go:81] duration metric: took 400.804067ms for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:11.622101   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-scheduler-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.622109   65864 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:12.021835   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:12.021869   65864 pod_ready.go:81] duration metric: took 399.741056ms for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:12.021882   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:12.021892   65864 pod_ready.go:38] duration metric: took 1.303069721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
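
The pod_ready.go wait above looks for the Ready condition on each system-critical pod, and short-circuits ("skipping!") as soon as the hosting node itself reports Ready=False, because the kubelet cannot mark pods Ready on a not-ready node. A compact client-go sketch of the per-pod condition test (standard k8s.io/client-go packages, pod name and kubeconfig path from this log; not minikube's actual code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod carries condition Ready=True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18375-4912/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-lptfk", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podReady(pod))
}
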
	I0314 00:58:12.021915   65864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:58:12.039361   65864 ops.go:34] apiserver oom_adj: -16
	I0314 00:58:12.039397   65864 kubeadm.go:591] duration metric: took 11.402802169s to restartPrimaryControlPlane
	I0314 00:58:12.039408   65864 kubeadm.go:393] duration metric: took 11.467836192s to StartCluster
	I0314 00:58:12.039426   65864 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:12.039516   65864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:12.041925   65864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:12.042230   65864 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:58:12.044069   65864 out.go:177] * Verifying Kubernetes components...
	I0314 00:58:12.042310   65864 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:58:12.042489   65864 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:58:12.045460   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:12.045470   65864 addons.go:69] Setting metrics-server=true in profile "no-preload-585806"
	I0314 00:58:12.045505   65864 addons.go:234] Setting addon metrics-server=true in "no-preload-585806"
	W0314 00:58:12.045517   65864 addons.go:243] addon metrics-server should already be in state true
	I0314 00:58:12.045461   65864 addons.go:69] Setting storage-provisioner=true in profile "no-preload-585806"
	I0314 00:58:12.045548   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.045557   65864 addons.go:234] Setting addon storage-provisioner=true in "no-preload-585806"
	W0314 00:58:12.045568   65864 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:58:12.045595   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.045462   65864 addons.go:69] Setting default-storageclass=true in profile "no-preload-585806"
	I0314 00:58:12.045653   65864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-585806"
	I0314 00:58:12.045960   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.045989   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.045989   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.046009   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.046026   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.046052   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.065596   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38705
	I0314 00:58:12.065599   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
	I0314 00:58:12.066126   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.066229   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.066725   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.066747   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.066921   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.066937   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.067164   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.067341   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.067347   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.067943   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.067969   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.071254   65864 addons.go:234] Setting addon default-storageclass=true in "no-preload-585806"
	W0314 00:58:12.071275   65864 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:58:12.071302   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.071676   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.071703   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.089025   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0314 00:58:12.089439   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.089971   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.089987   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.091596   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38431
	I0314 00:58:12.091896   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.092061   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.092552   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.092573   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.092792   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39831
	I0314 00:58:12.092997   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.093009   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.093356   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.093879   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.093914   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.094125   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.094811   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.094830   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.095229   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.095432   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.097415   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.099392   65864 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:12.100577   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:58:12.100594   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:58:12.100618   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.103892   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.104467   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.104489   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.104667   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.106971   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.107150   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.107313   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.111900   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0314 00:58:12.112581   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.113114   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.113130   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.113580   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.113776   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.115360   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.115676   65864 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:12.115691   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:58:12.115707   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.117453   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0314 00:58:12.118029   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.118488   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.118776   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.118793   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.118960   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.118982   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.119173   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.119729   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.119945   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.121529   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.123821   65864 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:08.814918   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:08.815383   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:08.815414   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:08.815336   66971 retry.go:31] will retry after 1.619075545s: waiting for machine to come up
	I0314 00:58:10.435841   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:10.436245   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:10.436272   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:10.436204   66971 retry.go:31] will retry after 2.396707044s: waiting for machine to come up
	I0314 00:58:12.834287   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:12.834691   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:12.834720   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:12.834649   66971 retry.go:31] will retry after 2.803309164s: waiting for machine to come up
	I0314 00:58:12.122163   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.125529   65864 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:12.125549   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:58:12.125566   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.125622   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.128908   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.128920   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.129475   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.129499   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.129653   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.129851   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.130023   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.130149   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.258865   65864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:12.279758   65864 node_ready.go:35] waiting up to 6m0s for node "no-preload-585806" to be "Ready" ...
	I0314 00:58:12.393255   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:58:12.393276   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:58:12.396083   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:12.401894   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:12.442825   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:58:12.442852   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:58:12.516967   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:12.516997   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:58:12.549493   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:13.476386   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.080265638s)
	I0314 00:58:13.476460   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.476489   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.476397   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.074462931s)
	I0314 00:58:13.476626   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.476639   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477023   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477039   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477036   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477047   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.477055   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477066   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477071   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477087   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477094   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.477100   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477458   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477491   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477498   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477550   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477566   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.489141   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.489174   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.489460   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.489522   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.489541   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.586956   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037420385s)
	I0314 00:58:13.587013   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.587029   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.587367   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.587386   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.587396   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.587405   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.587406   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.587781   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.587856   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.587878   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.587910   65864 addons.go:470] Verifying addon metrics-server=true in "no-preload-585806"
	I0314 00:58:13.590325   65864 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:58:13.591691   65864 addons.go:505] duration metric: took 1.549382287s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
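	(Aside: the addon flow logged above copies each manifest to /etc/kubernetes/addons/ on the node and then applies it with the node's bundled kubectl against the in-VM kubeconfig. Below is a minimal, hypothetical Go sketch of that apply step, assuming it runs directly on the node rather than over SSH the way ssh_runner does; the kubectl and kubeconfig paths are simply the ones that appear in this log, not an authoritative interface.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyAddonManifests mirrors the shape of the logged command:
	//   sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	//     /var/lib/minikube/binaries/<ver>/kubectl apply -f <manifest> [-f ...]
	// It assumes it is executed on the node itself (the real flow runs over SSH).
	func applyAddonManifests(kubectlPath, kubeconfig string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectlPath, args...)
		// Inherit the environment and point kubectl at the in-VM kubeconfig.
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\n", out)
		return err
	}

	func main() {
		_ = applyAddonManifests(
			"/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl",
			"/var/lib/minikube/kubeconfig",
			[]string{
				"/etc/kubernetes/addons/storage-provisioner.yaml",
				"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			},
		)
	}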
	I0314 00:58:10.176806   66021 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:10.176884   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:10.677299   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:11.177069   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:11.214552   66021 api_server.go:72] duration metric: took 1.037744324s to wait for apiserver process to appear ...
	I0314 00:58:11.214587   66021 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:11.214610   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:11.215138   66021 api_server.go:269] stopped: https://192.168.61.7:8444/healthz: Get "https://192.168.61.7:8444/healthz": dial tcp 192.168.61.7:8444: connect: connection refused
	I0314 00:58:11.714667   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.616838   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:14.616877   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:14.616893   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.658759   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:14.658796   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:14.715024   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.733591   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:14.733634   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:15.214665   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:15.234066   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:15.234110   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:15.715301   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:15.721645   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:15.721675   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:16.215286   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:16.222564   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0314 00:58:16.232709   66021 api_server.go:141] control plane version: v1.28.4
	I0314 00:58:16.232737   66021 api_server.go:131] duration metric: took 5.018142072s to wait for apiserver health ...
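	(Aside: the healthz probe sequence above shows the usual progression for a restarting apiserver: connection refused while it is not yet listening, 403 while anonymous access to /healthz is still forbidden during RBAC bootstrap, 500 while post-start hooks are still failing, and finally 200. The Go sketch below illustrates that kind of polling loop; it is an illustration of the pattern the log shows, not minikube's api_server.go, and the endpoint URL is just the one from this run.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
	// or the timeout elapses. TLS verification is skipped because the probe
	// runs before client certificates are usable, as in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				// apiserver not listening yet (connection refused): retry.
				time.Sleep(500 * time.Millisecond)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 and 500 both mean "not ready yet"; log and keep polling.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.7:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}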
	I0314 00:58:16.232747   66021 cni.go:84] Creating CNI manager for ""
	I0314 00:58:16.232756   66021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:16.234470   66021 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:16.235612   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:58:16.248214   66021 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:58:16.277370   66021 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:58:16.288623   66021 system_pods.go:59] 8 kube-system pods found
	I0314 00:58:16.288650   66021 system_pods.go:61] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:58:16.288657   66021 system_pods.go:61] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:58:16.288663   66021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:58:16.288671   66021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:58:16.288677   66021 system_pods.go:61] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 00:58:16.288682   66021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:58:16.288687   66021 system_pods.go:61] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:58:16.288690   66021 system_pods.go:61] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 00:58:16.288696   66021 system_pods.go:74] duration metric: took 11.305344ms to wait for pod list to return data ...
	I0314 00:58:16.288702   66021 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:58:16.292286   66021 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:58:16.292308   66021 node_conditions.go:123] node cpu capacity is 2
	I0314 00:58:16.292320   66021 node_conditions.go:105] duration metric: took 3.61409ms to run NodePressure ...
	I0314 00:58:16.292335   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:16.512870   66021 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:58:16.517507   66021 kubeadm.go:733] kubelet initialised
	I0314 00:58:16.517529   66021 kubeadm.go:734] duration metric: took 4.638745ms waiting for restarted kubelet to initialise ...
	I0314 00:58:16.517536   66021 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:16.523002   66021 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.527973   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.527992   66021 pod_ready.go:81] duration metric: took 4.971635ms for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.527999   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.528005   66021 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.532109   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.532130   66021 pod_ready.go:81] duration metric: took 4.119441ms for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.532138   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.532144   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.536921   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.536947   66021 pod_ready.go:81] duration metric: took 4.797369ms for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.536957   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.536963   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.681145   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.681174   66021 pod_ready.go:81] duration metric: took 144.203955ms for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.681183   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.681189   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.081346   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-proxy-s7dwp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.081372   66021 pod_ready.go:81] duration metric: took 400.176843ms for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.081380   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-proxy-s7dwp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.081386   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.481726   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.481760   66021 pod_ready.go:81] duration metric: took 400.364366ms for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.481775   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.481784   66021 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.881076   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.881101   66021 pod_ready.go:81] duration metric: took 399.308565ms for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.881112   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.881118   66021 pod_ready.go:38] duration metric: took 1.363574607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
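	(Aside: the pod_ready block above skips each control-plane pod while the node itself still reports Ready=False and re-checks once the node comes up. Outside the test harness, a comparable wait can be expressed with `kubectl wait`; the sketch below is a generic stand-in for that step, not what pod_ready.go actually runs, and the selectors are taken from the label list in the log.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// waitForSystemPods shells out to `kubectl wait` for each selector that the
	// logged pod_ready step checks (kube-dns, etcd, kube-apiserver, ...). It is
	// a standalone approximation, not minikube's implementation.
	func waitForSystemPods(kubeconfig string, selectors []string) error {
		for _, sel := range selectors {
			cmd := exec.Command("kubectl",
				"--kubeconfig", kubeconfig,
				"-n", "kube-system",
				"wait", "--for=condition=Ready", "pod",
				"-l", sel, "--timeout=4m")
			cmd.Stdout = os.Stdout
			cmd.Stderr = os.Stderr
			if err := cmd.Run(); err != nil {
				return fmt.Errorf("pods matching %q not ready: %w", sel, err)
			}
		}
		return nil
	}

	func main() {
		_ = waitForSystemPods(os.Getenv("KUBECONFIG"), []string{
			"k8s-app=kube-dns",
			"component=etcd",
			"component=kube-apiserver",
			"component=kube-controller-manager",
			"k8s-app=kube-proxy",
			"component=kube-scheduler",
		})
	}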
	I0314 00:58:17.881137   66021 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:58:17.893680   66021 ops.go:34] apiserver oom_adj: -16
	I0314 00:58:17.893703   66021 kubeadm.go:591] duration metric: took 9.411432465s to restartPrimaryControlPlane
	I0314 00:58:17.893711   66021 kubeadm.go:393] duration metric: took 9.465165177s to StartCluster
	I0314 00:58:17.893725   66021 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:17.893783   66021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:17.895292   66021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:17.895523   66021 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:58:17.897956   66021 out.go:177] * Verifying Kubernetes components...
	I0314 00:58:17.895646   66021 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:58:17.895730   66021 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:17.898002   66021 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.898023   66021 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.899554   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:17.897994   66021 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.899681   66021 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.899693   66021 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:58:17.898063   66021 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-652215"
	I0314 00:58:17.899720   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.898068   66021 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.899784   66021 addons.go:243] addon metrics-server should already be in state true
	I0314 00:58:17.899811   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.900048   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900077   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.900111   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900141   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.900171   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900188   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.915185   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0314 00:58:17.915208   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
	I0314 00:58:17.915576   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.915710   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.916152   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.916171   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.916305   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.916330   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.916511   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.916671   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.916831   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.917105   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.917132   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.918252   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41673
	I0314 00:58:17.918697   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.919230   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.919250   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.919523   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.920110   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.920171   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.920214   66021 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.920231   66021 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:58:17.920262   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.920646   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.920681   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.932173   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0314 00:58:17.932593   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.933094   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.933117   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.933473   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.933707   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.934448   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0314 00:58:17.934516   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I0314 00:58:17.934891   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.935069   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.935423   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.935443   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.935577   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.935595   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.935663   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.937699   66021 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:17.936039   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.936042   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.938931   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:58:17.938948   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:58:17.938977   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.939211   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.939596   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.939625   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.941065   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.942845   66021 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:15.639214   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:15.639656   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:15.639696   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:15.639617   66971 retry.go:31] will retry after 3.192360952s: waiting for machine to come up
	I0314 00:58:14.292798   65864 node_ready.go:53] node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:16.784397   65864 node_ready.go:53] node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:17.284580   65864 node_ready.go:49] node "no-preload-585806" has status "Ready":"True"
	I0314 00:58:17.284611   65864 node_ready.go:38] duration metric: took 5.004823398s for node "no-preload-585806" to be "Ready" ...
	I0314 00:58:17.284623   65864 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:17.290888   65864 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.297127   65864 pod_ready.go:92] pod "coredns-76f75df574-lptfk" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:17.297152   65864 pod_ready.go:81] duration metric: took 6.235547ms for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.297163   65864 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.944316   66021 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:17.942113   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.942648   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.944350   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:58:17.944376   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.944371   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.944451   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.944500   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.944675   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.944826   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:17.947097   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.947474   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.947507   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.947640   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.947816   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.947960   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.948095   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:17.957502   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35563
	I0314 00:58:17.957899   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.958344   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.958364   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.958645   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.958816   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.960222   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.960577   66021 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:17.960591   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:58:17.960610   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.963238   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.963676   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.963698   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.963850   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.963995   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.964114   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.964213   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:18.098402   66021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:18.116854   66021 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-652215" to be "Ready" ...
	I0314 00:58:18.232236   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:58:18.232256   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:58:18.238208   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:18.261851   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:18.263856   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:58:18.263877   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:58:18.325498   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:18.325520   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:58:18.391369   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:19.482825   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24458075s)
	I0314 00:58:19.482879   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.482891   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.482959   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.221078542s)
	I0314 00:58:19.483000   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483017   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483196   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483216   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.483212   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.483224   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483233   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483242   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483258   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.483273   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483280   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483288   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.483551   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483590   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.484020   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.484105   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.484148   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.491315   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.491332   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.491552   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.491583   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583024   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.191597961s)
	I0314 00:58:19.583083   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.583096   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.583362   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.583400   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583421   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.583435   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.583447   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.583724   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.583762   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.583815   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583837   66021 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-652215"
	I0314 00:58:19.585771   66021 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:58:19.587252   66021 addons.go:505] duration metric: took 1.691609624s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 00:58:20.120924   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:18.833069   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:18.833438   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:18.833470   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:18.833388   66971 retry.go:31] will retry after 5.67556795s: waiting for machine to come up
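	(Aside: the repeated "will retry after ...: waiting for machine to come up" lines come from libmachine waiting for the restarted VM to obtain a DHCP lease, with each attempt backing off a little longer. A hedged Go sketch of that retry-with-growing-backoff pattern is below; it is illustrative only and is not minikube's retry.go.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or maxAttempts is
	// reached, sleeping a little longer (with jitter) between attempts, which
	// is the same shape as the "will retry after Ns" lines in the log.
	func retryWithBackoff(maxAttempts int, fn func() error) error {
		wait := time.Second
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(wait) / 2))
			sleep := wait + jitter
			fmt.Printf("attempt %d failed (%v): will retry after %s\n", attempt, err, sleep)
			time.Sleep(sleep)
			wait *= 2 // grow the base delay each round
		}
		return err
	}

	func main() {
		_ = retryWithBackoff(5, func() error {
			return errors.New("unable to find current IP address of domain")
		})
	}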
	I0314 00:58:19.304162   65864 pod_ready.go:102] pod "etcd-no-preload-585806" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:20.804158   65864 pod_ready.go:92] pod "etcd-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.804180   65864 pod_ready.go:81] duration metric: took 3.507009199s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.804191   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.810040   65864 pod_ready.go:92] pod "kube-apiserver-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.810065   65864 pod_ready.go:81] duration metric: took 5.865494ms for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.810080   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.815049   65864 pod_ready.go:92] pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.815077   65864 pod_ready.go:81] duration metric: took 4.984409ms for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.815086   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.821316   65864 pod_ready.go:92] pod "kube-proxy-wpdb9" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.821342   65864 pod_ready.go:81] duration metric: took 6.249664ms for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.821354   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:21.828500   65864 pod_ready.go:92] pod "kube-scheduler-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:21.828524   65864 pod_ready.go:81] duration metric: took 1.00716238s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:21.828533   65864 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:22.621791   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:25.121386   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:26.059625   65557 start.go:364] duration metric: took 59.181975988s to acquireMachinesLock for "embed-certs-164135"
	I0314 00:58:26.059670   65557 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:58:26.059681   65557 fix.go:54] fixHost starting: 
	I0314 00:58:26.060084   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:26.060117   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:26.079338   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0314 00:58:26.079705   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:26.080159   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:58:26.080181   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:26.080547   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:26.080747   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:26.080907   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:58:26.082633   65557 fix.go:112] recreateIfNeeded on embed-certs-164135: state=Stopped err=<nil>
	I0314 00:58:26.082671   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	W0314 00:58:26.082861   65557 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:58:26.085610   65557 out.go:177] * Restarting existing kvm2 VM for "embed-certs-164135" ...
	I0314 00:58:24.511666   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.512275   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has current primary IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.512307   66232 main.go:141] libmachine: (old-k8s-version-004791) Found IP for machine: 192.168.72.11
	I0314 00:58:24.512321   66232 main.go:141] libmachine: (old-k8s-version-004791) Reserving static IP address...
	I0314 00:58:24.512704   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.512726   66232 main.go:141] libmachine: (old-k8s-version-004791) Reserved static IP address: 192.168.72.11
	I0314 00:58:24.512740   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | skip adding static IP to network mk-old-k8s-version-004791 - found existing host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"}
	I0314 00:58:24.512751   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Getting to WaitForSSH function...
	I0314 00:58:24.512763   66232 main.go:141] libmachine: (old-k8s-version-004791) Waiting for SSH to be available...
	I0314 00:58:24.515177   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.515623   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.515657   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.515863   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH client type: external
	I0314 00:58:24.515892   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa (-rw-------)
	I0314 00:58:24.515924   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:58:24.515940   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | About to run SSH command:
	I0314 00:58:24.515956   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | exit 0
	I0314 00:58:24.642866   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | SSH cmd err, output: <nil>: 
	I0314 00:58:24.643186   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetConfigRaw
	I0314 00:58:24.643853   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:24.645950   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.646309   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.646338   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.646566   66232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:58:24.646801   66232 machine.go:94] provisionDockerMachine start ...
	I0314 00:58:24.646823   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:24.647032   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.649249   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.649588   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.649618   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.649752   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.649926   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.650131   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.650315   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.650487   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.650664   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.650675   66232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:58:24.763290   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:58:24.763320   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:24.763558   66232 buildroot.go:166] provisioning hostname "old-k8s-version-004791"
	I0314 00:58:24.763592   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:24.763745   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.766422   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.766719   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.766745   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.766894   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.767075   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.767238   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.767388   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.767564   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.767776   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.767795   66232 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-004791 && echo "old-k8s-version-004791" | sudo tee /etc/hostname
	I0314 00:58:24.893811   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-004791
	
	I0314 00:58:24.893844   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.896527   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.896909   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.896937   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.897096   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.897277   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.897455   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.897623   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.897814   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.897979   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.897995   66232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-004791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-004791/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-004791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:25.021661   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:25.021695   66232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:25.021722   66232 buildroot.go:174] setting up certificates
	I0314 00:58:25.021735   66232 provision.go:84] configureAuth start
	I0314 00:58:25.021766   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:25.022032   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:25.024687   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.024989   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.025030   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.025155   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.027609   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.027948   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.027977   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.028079   66232 provision.go:143] copyHostCerts
	I0314 00:58:25.028145   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:25.028155   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:25.028208   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:25.028333   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:25.028342   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:25.028361   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:25.028421   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:25.028428   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:25.028445   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:25.028532   66232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-004791 san=[127.0.0.1 192.168.72.11 localhost minikube old-k8s-version-004791]
	I0314 00:58:25.338174   66232 provision.go:177] copyRemoteCerts
	I0314 00:58:25.338239   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:25.338272   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.340651   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.341044   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.341084   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.341243   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.341445   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.341613   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.341779   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:25.437346   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 00:58:25.464534   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:58:25.491186   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:25.520290   66232 provision.go:87] duration metric: took 498.536449ms to configureAuth
	I0314 00:58:25.520330   66232 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:25.520551   66232 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:58:25.520631   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.523579   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.523954   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.523982   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.524176   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.524418   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.524604   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.524841   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.525032   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:25.525233   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:25.525267   66232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:25.813702   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:25.813724   66232 machine.go:97] duration metric: took 1.166910056s to provisionDockerMachine
	I0314 00:58:25.813735   66232 start.go:293] postStartSetup for "old-k8s-version-004791" (driver="kvm2")
	I0314 00:58:25.813745   66232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:25.813767   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:25.814102   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:25.814132   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.816973   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.817316   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.817351   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.817496   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.817695   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.817895   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.818065   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:25.905564   66232 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:25.910139   66232 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:25.910168   66232 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:25.910237   66232 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:25.910315   66232 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:25.910406   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:25.919998   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:25.946236   66232 start.go:296] duration metric: took 132.483335ms for postStartSetup
	I0314 00:58:25.946270   66232 fix.go:56] duration metric: took 24.778527973s for fixHost
	I0314 00:58:25.946291   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.948993   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.949352   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.949382   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.949491   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.949674   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.949839   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.950008   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.950178   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:25.950327   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:25.950337   66232 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:58:26.059477   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377906.045276928
	
	I0314 00:58:26.059498   66232 fix.go:216] guest clock: 1710377906.045276928
	I0314 00:58:26.059504   66232 fix.go:229] Guest: 2024-03-14 00:58:26.045276928 +0000 UTC Remote: 2024-03-14 00:58:25.946273472 +0000 UTC m=+262.884746009 (delta=99.003456ms)
	I0314 00:58:26.059522   66232 fix.go:200] guest clock delta is within tolerance: 99.003456ms
	I0314 00:58:26.059528   66232 start.go:83] releasing machines lock for "old-k8s-version-004791", held for 24.891823469s
	I0314 00:58:26.059556   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.059832   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:26.062667   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.063095   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.063126   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.063322   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064047   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064262   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064348   66232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:26.064396   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:26.064505   66232 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:26.064530   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:26.067308   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067569   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067602   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.067626   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067738   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:26.067912   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:26.068059   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:26.068063   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.068095   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.068199   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:26.068210   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:26.068347   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:26.068538   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:26.068717   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:26.182072   66232 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:26.188630   66232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:26.337675   66232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:26.344107   66232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:26.344178   66232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:26.363679   66232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:26.363704   66232 start.go:494] detecting cgroup driver to use...
	I0314 00:58:26.363770   66232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:26.380626   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:26.397287   66232 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:26.397354   66232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:26.411921   66232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:26.428111   66232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:26.548503   66232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:26.718585   66232 docker.go:233] disabling docker service ...
	I0314 00:58:26.718667   66232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:26.737814   66232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:26.759326   66232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:26.907505   66232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:27.052915   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:27.074324   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:27.096627   66232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 00:58:27.096688   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.109204   66232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:27.109280   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.122529   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.135542   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.149084   66232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:27.166838   66232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:27.178148   66232 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:27.178201   66232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:27.194015   66232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:27.206652   66232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:27.363680   66232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:58:27.546218   66232 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:27.546291   66232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:27.552622   66232 start.go:562] Will wait 60s for crictl version
	I0314 00:58:27.552693   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:27.557087   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:27.600271   66232 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:27.600369   66232 ssh_runner.go:195] Run: crio --version
	I0314 00:58:27.631397   66232 ssh_runner.go:195] Run: crio --version
	I0314 00:58:27.670760   66232 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 00:58:27.671963   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:27.674890   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:27.675324   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:27.675352   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:27.675617   66232 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:27.680460   66232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:27.694168   66232 kubeadm.go:877] updating cluster {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:27.694308   66232 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:58:27.694363   66232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:27.750541   66232 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:58:27.750608   66232 ssh_runner.go:195] Run: which lz4
	I0314 00:58:27.755341   66232 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 00:58:27.759948   66232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:27.759972   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 00:58:23.835559   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:25.840794   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:28.343597   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:26.087053   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Start
	I0314 00:58:26.087223   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring networks are active...
	I0314 00:58:26.087972   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring network default is active
	I0314 00:58:26.088454   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring network mk-embed-certs-164135 is active
	I0314 00:58:26.088918   65557 main.go:141] libmachine: (embed-certs-164135) Getting domain xml...
	I0314 00:58:26.089551   65557 main.go:141] libmachine: (embed-certs-164135) Creating domain...
	I0314 00:58:27.427891   65557 main.go:141] libmachine: (embed-certs-164135) Waiting to get IP...
	I0314 00:58:27.428743   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.429231   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.429301   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.429210   67191 retry.go:31] will retry after 285.906124ms: waiting for machine to come up
	I0314 00:58:27.716658   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.717175   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.717209   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.717136   67191 retry.go:31] will retry after 261.410434ms: waiting for machine to come up
	I0314 00:58:27.980701   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.981229   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.981260   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.981171   67191 retry.go:31] will retry after 383.915233ms: waiting for machine to come up
	I0314 00:58:28.366876   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:28.367381   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:28.367410   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:28.367323   67191 retry.go:31] will retry after 409.436475ms: waiting for machine to come up
	I0314 00:58:28.778072   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:28.778576   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:28.778610   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:28.778531   67191 retry.go:31] will retry after 645.067189ms: waiting for machine to come up
	I0314 00:58:25.621956   66021 node_ready.go:49] node "default-k8s-diff-port-652215" has status "Ready":"True"
	I0314 00:58:25.621981   66021 node_ready.go:38] duration metric: took 7.505100774s for node "default-k8s-diff-port-652215" to be "Ready" ...
	I0314 00:58:25.622001   66021 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:25.629545   66021 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.639732   66021 pod_ready.go:92] pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.639756   66021 pod_ready.go:81] duration metric: took 10.187009ms for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.639764   66021 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.645147   66021 pod_ready.go:92] pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.645169   66021 pod_ready.go:81] duration metric: took 5.39858ms for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.645177   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.654707   66021 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.654733   66021 pod_ready.go:81] duration metric: took 9.549239ms for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.654744   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.662542   66021 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.662564   66021 pod_ready.go:81] duration metric: took 7.811214ms for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.662573   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:26.022161   66021 pod_ready.go:92] pod "kube-proxy-s7dwp" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:26.022183   66021 pod_ready.go:81] duration metric: took 359.604841ms for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:26.022192   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:28.034582   66021 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:29.648218   66232 crio.go:444] duration metric: took 1.892901715s to copy over tarball
	I0314 00:58:29.648301   66232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:32.846478   66232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.198145754s)
	I0314 00:58:32.846506   66232 crio.go:451] duration metric: took 3.198257099s to extract the tarball
	I0314 00:58:32.846513   66232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:32.893263   66232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:32.930449   66232 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:58:32.930473   66232 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:58:32.930511   66232 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:32.930536   66232 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:32.930550   66232 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:32.930559   66232 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:32.930802   66232 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:32.930888   66232 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 00:58:32.930940   66232 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:32.931147   66232 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:32.931888   66232 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:32.931948   66232 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:32.932319   66232 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:32.932341   66232 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 00:58:32.932374   66232 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:32.932381   66232 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:32.932370   66232 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:32.932419   66232 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:30.836400   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:32.841831   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:29.425434   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:29.425984   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:29.426008   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:29.425942   67191 retry.go:31] will retry after 703.398838ms: waiting for machine to come up
	I0314 00:58:30.130649   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:30.131265   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:30.131297   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:30.131224   67191 retry.go:31] will retry after 787.377618ms: waiting for machine to come up
	I0314 00:58:30.919951   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:30.920381   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:30.920416   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:30.920331   67191 retry.go:31] will retry after 1.211901471s: waiting for machine to come up
	I0314 00:58:32.133720   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:32.134308   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:32.134337   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:32.134254   67191 retry.go:31] will retry after 1.852403479s: waiting for machine to come up
	I0314 00:58:33.987895   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:33.988474   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:33.988503   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:33.988426   67191 retry.go:31] will retry after 2.321557159s: waiting for machine to come up
	I0314 00:58:30.530679   66021 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:30.530711   66021 pod_ready.go:81] duration metric: took 4.508510256s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:30.530725   66021 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:32.539227   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:34.543975   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:33.154008   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 00:58:33.158391   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.163815   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.167903   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.168224   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.169039   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.185385   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.418931   66232 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 00:58:33.418981   66232 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 00:58:33.419031   66232 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 00:58:33.419052   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419063   66232 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 00:58:33.419099   66232 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.419031   66232 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 00:58:33.419118   66232 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 00:58:33.419141   66232 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.419173   66232 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 00:58:33.419200   66232 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.419232   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419099   66232 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.419310   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419177   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419143   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419142   66232 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 00:58:33.419396   66232 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.419419   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419144   66232 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.419472   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.436581   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 00:58:33.436585   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.436693   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.436697   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.436760   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.436812   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.436821   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.605693   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 00:58:33.605727   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 00:58:33.605788   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 00:58:33.605799   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 00:58:33.605879   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 00:58:33.605912   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 00:58:33.605952   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 00:58:33.844071   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:33.989885   66232 cache_images.go:92] duration metric: took 1.059398314s to LoadCachedImages
	W0314 00:58:33.990001   66232 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0314 00:58:33.990027   66232 kubeadm.go:928] updating node { 192.168.72.11 8443 v1.20.0 crio true true} ...
	I0314 00:58:33.990157   66232 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-004791 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:33.990220   66232 ssh_runner.go:195] Run: crio config
	I0314 00:58:34.044723   66232 cni.go:84] Creating CNI manager for ""
	I0314 00:58:34.044746   66232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:34.044759   66232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:34.044775   66232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-004791 NodeName:old-k8s-version-004791 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 00:58:34.044900   66232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-004791"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:34.044958   66232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 00:58:34.059679   66232 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:34.059734   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:34.073682   66232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0314 00:58:34.095098   66232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:34.113899   66232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0314 00:58:34.132875   66232 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:34.137285   66232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:34.151566   66232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:34.276059   66232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:34.295472   66232 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791 for IP: 192.168.72.11
	I0314 00:58:34.295496   66232 certs.go:194] generating shared ca certs ...
	I0314 00:58:34.295528   66232 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:34.295718   66232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:34.295779   66232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:34.295794   66232 certs.go:256] generating profile certs ...
	I0314 00:58:34.295909   66232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/client.key
	I0314 00:58:34.295968   66232 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key.c57f8e0c
	I0314 00:58:34.296022   66232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key
	I0314 00:58:34.296176   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:34.296213   66232 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:34.296224   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:34.296255   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:34.296296   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:34.296336   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:34.296397   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:34.297181   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:34.351330   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:34.389003   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:34.439281   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:34.476704   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 00:58:34.524931   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:58:34.554905   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:34.584216   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:58:34.610661   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:34.636484   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:34.662623   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:34.692373   66232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:34.714670   66232 ssh_runner.go:195] Run: openssl version
	I0314 00:58:34.721394   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:34.734219   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.739692   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.739767   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.746281   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:34.758520   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:34.770960   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.775963   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.776034   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.782485   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:34.795932   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:34.808632   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.814277   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.814338   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.820985   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:34.832959   66232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:34.838642   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:34.845061   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:34.852475   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:34.859861   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:34.866413   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:34.873327   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 00:58:34.880000   66232 kubeadm.go:391] StartCluster: {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:34.880134   66232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:34.880194   66232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:34.927555   66232 cri.go:89] found id: ""
	I0314 00:58:34.927623   66232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:34.939638   66232 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:34.939668   66232 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:34.939677   66232 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:34.939741   66232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:34.950530   66232 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:34.952013   66232 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-004791" does not appear in /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:34.952997   66232 kubeconfig.go:62] /home/jenkins/minikube-integration/18375-4912/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-004791" cluster setting kubeconfig missing "old-k8s-version-004791" context setting]
	I0314 00:58:34.954526   66232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:34.956927   66232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:34.968566   66232 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.11
	I0314 00:58:34.968605   66232 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:34.968619   66232 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:34.968700   66232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:35.007848   66232 cri.go:89] found id: ""
	I0314 00:58:35.007925   66232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:35.025328   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:35.038637   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:35.038656   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:35.038709   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:35.050807   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:35.050869   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:35.063219   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:35.075855   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:35.075920   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:35.085699   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:35.095334   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:35.095380   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:35.105241   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:35.115726   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:35.115792   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:35.125426   66232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:35.135277   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:35.258033   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.100884   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.354746   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.473996   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.579335   66232 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:36.579424   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:37.079896   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:37.579976   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:38.079765   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:35.336276   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:37.336541   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:36.312235   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:36.312720   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:36.312746   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:36.312680   67191 retry.go:31] will retry after 2.808090469s: waiting for machine to come up
	I0314 00:58:39.123977   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:39.124488   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:39.124538   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:39.124440   67191 retry.go:31] will retry after 2.588860378s: waiting for machine to come up
	I0314 00:58:37.037739   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:39.540372   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:38.579818   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.079976   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.579658   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:40.079585   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:40.580162   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:41.079979   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:41.580503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:42.079887   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:42.579730   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:43.080073   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.838343   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:42.335840   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:41.714544   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:41.715054   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:41.715078   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:41.715008   67191 retry.go:31] will retry after 4.450032332s: waiting for machine to come up
	I0314 00:58:41.540801   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:44.037483   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:43.579875   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.080058   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.579576   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:45.080234   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:45.579747   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:46.080269   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:46.579541   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:47.079514   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:47.580409   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:48.080337   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.337213   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:46.835872   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:46.166725   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.167181   65557 main.go:141] libmachine: (embed-certs-164135) Found IP for machine: 192.168.50.72
	I0314 00:58:46.167200   65557 main.go:141] libmachine: (embed-certs-164135) Reserving static IP address...
	I0314 00:58:46.167211   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has current primary IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.167614   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "embed-certs-164135", mac: "52:54:00:58:8b:2b", ip: "192.168.50.72"} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.167650   65557 main.go:141] libmachine: (embed-certs-164135) Reserved static IP address: 192.168.50.72
	I0314 00:58:46.167671   65557 main.go:141] libmachine: (embed-certs-164135) DBG | skip adding static IP to network mk-embed-certs-164135 - found existing host DHCP lease matching {name: "embed-certs-164135", mac: "52:54:00:58:8b:2b", ip: "192.168.50.72"}
	I0314 00:58:46.167691   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Getting to WaitForSSH function...
	I0314 00:58:46.167705   65557 main.go:141] libmachine: (embed-certs-164135) Waiting for SSH to be available...
	I0314 00:58:46.169798   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.170208   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.170241   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.170374   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Using SSH client type: external
	I0314 00:58:46.170395   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa (-rw-------)
	I0314 00:58:46.170424   65557 main.go:141] libmachine: (embed-certs-164135) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:58:46.170436   65557 main.go:141] libmachine: (embed-certs-164135) DBG | About to run SSH command:
	I0314 00:58:46.170448   65557 main.go:141] libmachine: (embed-certs-164135) DBG | exit 0
	I0314 00:58:46.298947   65557 main.go:141] libmachine: (embed-certs-164135) DBG | SSH cmd err, output: <nil>: 
	I0314 00:58:46.299260   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetConfigRaw
	I0314 00:58:46.300011   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:46.302213   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.302573   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.302601   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.302857   65557 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/config.json ...
	I0314 00:58:46.303051   65557 machine.go:94] provisionDockerMachine start ...
	I0314 00:58:46.303073   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:46.303267   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.305543   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.305933   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.305966   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.306127   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.306278   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.306414   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.306542   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.306693   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.306879   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.306892   65557 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:58:46.423896   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:58:46.423927   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.424233   65557 buildroot.go:166] provisioning hostname "embed-certs-164135"
	I0314 00:58:46.424264   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.424489   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.427579   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.428007   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.428038   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.428220   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.428416   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.428609   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.428790   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.428972   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.429192   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.429222   65557 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-164135 && echo "embed-certs-164135" | sudo tee /etc/hostname
	I0314 00:58:46.563737   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-164135
	
	I0314 00:58:46.563766   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.566892   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.567220   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.567251   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.567453   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.567641   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.567802   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.567945   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.568094   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.568261   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.568276   65557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-164135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-164135/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-164135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:46.693410   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:46.693445   65557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:46.693499   65557 buildroot.go:174] setting up certificates
	I0314 00:58:46.693511   65557 provision.go:84] configureAuth start
	I0314 00:58:46.693529   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.693870   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:46.696706   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.697040   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.697071   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.697225   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.699614   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.699942   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.699973   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.700098   65557 provision.go:143] copyHostCerts
	I0314 00:58:46.700164   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:46.700178   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:46.700232   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:46.700361   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:46.700377   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:46.700411   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:46.700495   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:46.700505   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:46.700528   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:46.700580   65557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.embed-certs-164135 san=[127.0.0.1 192.168.50.72 embed-certs-164135 localhost minikube]
	I0314 00:58:46.821935   65557 provision.go:177] copyRemoteCerts
	I0314 00:58:46.822010   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:46.822046   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.824932   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.825275   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.825310   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.825512   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.825744   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.825887   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.826082   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:46.913839   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:46.943631   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 00:58:46.971617   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 00:58:46.999369   65557 provision.go:87] duration metric: took 305.844222ms to configureAuth
	I0314 00:58:46.999394   65557 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:46.999570   65557 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:46.999664   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.002702   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.003165   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.003190   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.003438   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.003687   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.003859   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.004006   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.004146   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:47.004340   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:47.004358   65557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:47.290132   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:47.290155   65557 machine.go:97] duration metric: took 987.089694ms to provisionDockerMachine
	I0314 00:58:47.290168   65557 start.go:293] postStartSetup for "embed-certs-164135" (driver="kvm2")
	I0314 00:58:47.290182   65557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:47.290203   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.290511   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:47.290552   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.293582   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.293932   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.293962   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.294089   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.294272   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.294428   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.294671   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.387339   65557 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:47.392557   65557 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:47.392582   65557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:47.392654   65557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:47.392748   65557 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:47.392858   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:47.404173   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:47.435222   65557 start.go:296] duration metric: took 145.038242ms for postStartSetup
	I0314 00:58:47.435269   65557 fix.go:56] duration metric: took 21.375588272s for fixHost
	I0314 00:58:47.435302   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.438631   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.439032   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.439076   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.439272   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.439467   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.439706   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.439850   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.440043   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:47.440200   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:47.440210   65557 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:58:47.560144   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377927.541841951
	
	I0314 00:58:47.560170   65557 fix.go:216] guest clock: 1710377927.541841951
	I0314 00:58:47.560182   65557 fix.go:229] Guest: 2024-03-14 00:58:47.541841951 +0000 UTC Remote: 2024-03-14 00:58:47.435274983 +0000 UTC m=+363.148559319 (delta=106.566968ms)
	I0314 00:58:47.560225   65557 fix.go:200] guest clock delta is within tolerance: 106.566968ms
	I0314 00:58:47.560232   65557 start.go:83] releasing machines lock for "embed-certs-164135", held for 21.500586263s
	I0314 00:58:47.560259   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.560524   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:47.563578   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.563984   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.564007   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.564165   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564627   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564837   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564919   65557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:47.564973   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.565070   65557 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:47.565097   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.567831   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568013   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568257   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.568284   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568398   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.568422   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568432   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.568625   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.568630   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.568821   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.568824   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.568927   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.568980   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.569131   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.652798   65557 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:47.689415   65557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:47.842567   65557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:47.849511   65557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:47.849574   65557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:47.868424   65557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:47.868448   65557 start.go:494] detecting cgroup driver to use...
	I0314 00:58:47.868509   65557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:47.887449   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:47.902382   65557 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:47.902442   65557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:47.916938   65557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:47.932214   65557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:48.055437   65557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:48.233856   65557 docker.go:233] disabling docker service ...
	I0314 00:58:48.233932   65557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:48.250632   65557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:48.265181   65557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:48.397526   65557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:48.539003   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:48.555791   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:48.576760   65557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:58:48.576812   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.589305   65557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:48.589410   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.602952   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.614619   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.626026   65557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:48.637921   65557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:48.648336   65557 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:48.648397   65557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:48.663603   65557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:48.674731   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:48.804506   65557 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:58:48.949960   65557 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:48.950037   65557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:48.955185   65557 start.go:562] Will wait 60s for crictl version
	I0314 00:58:48.955248   65557 ssh_runner.go:195] Run: which crictl
	I0314 00:58:48.959205   65557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:48.998285   65557 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:48.998378   65557 ssh_runner.go:195] Run: crio --version
	I0314 00:58:49.028352   65557 ssh_runner.go:195] Run: crio --version
	I0314 00:58:49.061493   65557 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 00:58:49.062817   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:49.065664   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:49.066015   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:49.066042   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:49.066240   65557 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:49.071178   65557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:49.085832   65557 kubeadm.go:877] updating cluster {Name:embed-certs-164135 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:49.086050   65557 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:58:49.086127   65557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:49.127181   65557 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 00:58:49.127258   65557 ssh_runner.go:195] Run: which lz4
	I0314 00:58:49.131578   65557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 00:58:49.136474   65557 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:49.136504   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 00:58:46.038840   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:48.540509   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:48.579595   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.079898   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.580139   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:50.079945   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:50.579977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:51.079981   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:51.580391   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:52.080057   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:52.579968   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:53.080503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.336251   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:51.841160   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:50.939606   65557 crio.go:444] duration metric: took 1.808075483s to copy over tarball
	I0314 00:58:50.939682   65557 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:53.536072   65557 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.596358521s)
	I0314 00:58:53.536109   65557 crio.go:451] duration metric: took 2.596476827s to extract the tarball
	I0314 00:58:53.536119   65557 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:53.579265   65557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:53.626350   65557 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:58:53.626371   65557 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:58:53.626378   65557 kubeadm.go:928] updating node { 192.168.50.72 8443 v1.28.4 crio true true} ...
	I0314 00:58:53.626500   65557 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-164135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:53.626586   65557 ssh_runner.go:195] Run: crio config
	I0314 00:58:53.679923   65557 cni.go:84] Creating CNI manager for ""
	I0314 00:58:53.679946   65557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:53.679958   65557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:53.679976   65557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.72 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-164135 NodeName:embed-certs-164135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:58:53.680104   65557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-164135"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:53.680163   65557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:58:53.690891   65557 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:53.690972   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:53.701173   65557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0314 00:58:53.719020   65557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:53.737828   65557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0314 00:58:53.756425   65557 ssh_runner.go:195] Run: grep 192.168.50.72	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:53.760294   65557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:53.773705   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:53.892346   65557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:53.910603   65557 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135 for IP: 192.168.50.72
	I0314 00:58:53.910627   65557 certs.go:194] generating shared ca certs ...
	I0314 00:58:53.910647   65557 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:53.910827   65557 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:53.910871   65557 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:53.910880   65557 certs.go:256] generating profile certs ...
	I0314 00:58:53.910979   65557 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/client.key
	I0314 00:58:53.911031   65557 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.key.e2917335
	I0314 00:58:53.911064   65557 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.key
	I0314 00:58:53.911166   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:53.911192   65557 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:53.911239   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:53.911262   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:53.911282   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:53.911306   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:53.911340   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:53.911957   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:53.966930   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:54.004054   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:54.052130   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:54.079203   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0314 00:58:54.120151   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:58:54.148078   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:54.176982   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 00:58:54.205291   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:54.231890   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:54.258106   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:54.284561   65557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:54.303013   65557 ssh_runner.go:195] Run: openssl version
	I0314 00:58:54.309043   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:54.320237   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.325350   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.325394   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.331618   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:51.037616   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:53.039388   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:53.579463   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.080043   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.579984   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:55.080165   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:55.580029   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.079980   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.580014   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.080139   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.580122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:58.080405   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.335226   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:56.841123   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:54.343570   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:54.542451   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.547508   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.547561   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.553553   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:54.565071   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:54.577055   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.582453   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.582503   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.588916   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:54.601405   65557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:54.606092   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:54.612639   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:54.619071   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:54.625702   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:54.631739   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:54.637769   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 00:58:54.644061   65557 kubeadm.go:391] StartCluster: {Name:embed-certs-164135 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:54.644158   65557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:54.644207   65557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:54.683466   65557 cri.go:89] found id: ""
	I0314 00:58:54.683537   65557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:54.695034   65557 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:54.695056   65557 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:54.695062   65557 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:54.695122   65557 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:54.706010   65557 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:54.707111   65557 kubeconfig.go:125] found "embed-certs-164135" server: "https://192.168.50.72:8443"
	I0314 00:58:54.709121   65557 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:54.722953   65557 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.72
	I0314 00:58:54.722994   65557 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:54.723009   65557 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:54.723100   65557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:54.787268   65557 cri.go:89] found id: ""
	I0314 00:58:54.787345   65557 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:54.816753   65557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:54.828303   65557 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:54.828333   65557 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:54.828385   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:54.841953   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:54.842070   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:54.854072   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:54.867993   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:54.868062   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:54.878707   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:54.888993   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:54.889070   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:54.899214   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:54.909228   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:54.909279   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:54.920066   65557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:54.931094   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.052967   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.727704   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.951743   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:56.038342   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:56.138332   65557 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:56.138421   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.639433   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.138622   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.167124   65557 api_server.go:72] duration metric: took 1.028792267s to wait for apiserver process to appear ...
	I0314 00:58:57.167147   65557 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:57.167168   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:58:57.167606   65557 api_server.go:269] stopped: https://192.168.50.72:8443/healthz: Get "https://192.168.50.72:8443/healthz": dial tcp 192.168.50.72:8443: connect: connection refused
	I0314 00:58:57.668020   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:58:55.579569   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:58.039695   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:00.039862   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:00.321979   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:59:00.322014   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:59:00.322033   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:00.354801   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:59:00.354829   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:59:00.668268   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:00.673345   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:59:00.673375   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:59:01.167291   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:01.172646   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:59:01.172674   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:59:01.667928   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:01.675916   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0314 00:59:01.684834   65557 api_server.go:141] control plane version: v1.28.4
	I0314 00:59:01.684866   65557 api_server.go:131] duration metric: took 4.517711081s to wait for apiserver health ...
	I0314 00:59:01.684877   65557 cni.go:84] Creating CNI manager for ""
	I0314 00:59:01.684886   65557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:59:01.687151   65557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:58.580011   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:59.079610   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:59.579674   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:00.079861   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:00.579713   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.079977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.580027   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:02.079793   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:02.579549   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:03.080040   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.688950   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:59:01.730963   65557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:59:01.777163   65557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:59:01.788546   65557 system_pods.go:59] 8 kube-system pods found
	I0314 00:59:01.788590   65557 system_pods.go:61] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:59:01.788602   65557 system_pods.go:61] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:59:01.788614   65557 system_pods.go:61] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:59:01.788626   65557 system_pods.go:61] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:59:01.788641   65557 system_pods.go:61] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 00:59:01.788650   65557 system_pods.go:61] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:59:01.788662   65557 system_pods.go:61] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:59:01.788681   65557 system_pods.go:61] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 00:59:01.788692   65557 system_pods.go:74] duration metric: took 11.509392ms to wait for pod list to return data ...
	I0314 00:59:01.788701   65557 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:59:01.795122   65557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:59:01.795147   65557 node_conditions.go:123] node cpu capacity is 2
	I0314 00:59:01.795157   65557 node_conditions.go:105] duration metric: took 6.44942ms to run NodePressure ...
	I0314 00:59:01.795172   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:59:02.044317   65557 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:59:02.050019   65557 kubeadm.go:733] kubelet initialised
	I0314 00:59:02.050040   65557 kubeadm.go:734] duration metric: took 5.70331ms waiting for restarted kubelet to initialise ...
	I0314 00:59:02.050049   65557 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:02.056678   65557 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.061780   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.061803   65557 pod_ready.go:81] duration metric: took 5.104116ms for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.061811   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.061817   65557 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.067102   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "etcd-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.067123   65557 pod_ready.go:81] duration metric: took 5.298132ms for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.067134   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "etcd-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.067142   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.072079   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.072097   65557 pod_ready.go:81] duration metric: took 4.946567ms for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.072105   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.072110   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.181781   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.181814   65557 pod_ready.go:81] duration metric: took 109.687713ms for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.181827   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.181835   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.581700   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-proxy-wjz6d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.581726   65557 pod_ready.go:81] duration metric: took 399.880012ms for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.581734   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-proxy-wjz6d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.581741   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.981386   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.981415   65557 pod_ready.go:81] duration metric: took 399.66708ms for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.981428   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.981434   65557 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:03.381927   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:03.381964   65557 pod_ready.go:81] duration metric: took 400.519247ms for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:03.381976   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:03.381986   65557 pod_ready.go:38] duration metric: took 1.331926826s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:03.382007   65557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:59:03.397550   65557 ops.go:34] apiserver oom_adj: -16
	I0314 00:59:03.397571   65557 kubeadm.go:591] duration metric: took 8.702501848s to restartPrimaryControlPlane
	I0314 00:59:03.397583   65557 kubeadm.go:393] duration metric: took 8.753529728s to StartCluster
	I0314 00:59:03.397601   65557 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:59:03.397687   65557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:59:03.399793   65557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:59:03.400058   65557 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:59:03.402113   65557 out.go:177] * Verifying Kubernetes components...
	I0314 00:59:03.400139   65557 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:59:03.400293   65557 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:59:03.403722   65557 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-164135"
	I0314 00:59:03.403746   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:59:03.403773   65557 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-164135"
	W0314 00:59:03.403788   65557 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:59:03.403822   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.403725   65557 addons.go:69] Setting metrics-server=true in profile "embed-certs-164135"
	I0314 00:59:03.403888   65557 addons.go:234] Setting addon metrics-server=true in "embed-certs-164135"
	W0314 00:59:03.403922   65557 addons.go:243] addon metrics-server should already be in state true
	I0314 00:59:03.403727   65557 addons.go:69] Setting default-storageclass=true in profile "embed-certs-164135"
	I0314 00:59:03.403960   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.403978   65557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-164135"
	I0314 00:59:03.404257   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404295   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.404316   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404332   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.404355   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404387   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.420268   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0314 00:59:03.420835   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.421449   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.421474   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.421817   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.421860   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0314 00:59:03.422393   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.422414   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.422447   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.422893   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.422917   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.423232   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.423387   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.423804   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44335
	I0314 00:59:03.424136   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.424718   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.424737   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.426912   65557 addons.go:234] Setting addon default-storageclass=true in "embed-certs-164135"
	W0314 00:59:03.426935   65557 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:59:03.426962   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.427356   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.427387   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.427586   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.428046   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.428077   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.440982   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40425
	I0314 00:59:03.441492   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.442055   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.442077   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.442569   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I0314 00:59:03.442608   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.442838   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.443084   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.443708   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.443729   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.444112   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.444150   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33079
	I0314 00:59:03.444307   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.444598   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.444915   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.445374   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.445408   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.448170   65557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:59:03.445928   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.445963   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.449754   65557 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:59:03.448952   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.449778   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:59:03.451092   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.451092   65557 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:59.336088   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:01.338156   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:03.452582   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:59:03.451157   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.452695   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:59:03.452720   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.454750   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.455252   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.455282   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.455410   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.455600   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.455777   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.455944   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.455989   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.456439   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.456477   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.456710   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.456869   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.457034   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.457226   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.469815   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35435
	I0314 00:59:03.470353   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.470873   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.470895   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.471166   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.471370   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.472977   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.473244   65557 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:59:03.473258   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:59:03.473271   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.476223   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.476682   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.476709   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.476857   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.477040   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.477171   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.477302   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.616718   65557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:59:03.634198   65557 node_ready.go:35] waiting up to 6m0s for node "embed-certs-164135" to be "Ready" ...
	I0314 00:59:03.716113   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:59:03.749507   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:59:03.749536   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:59:03.755619   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:59:03.790208   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:59:03.790231   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:59:03.846087   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:59:03.846118   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:59:03.892534   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:59:04.977315   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.221655296s)
	I0314 00:59:04.977372   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977386   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977433   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.261285831s)
	I0314 00:59:04.977471   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977481   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977698   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.977722   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.977731   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977738   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977783   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.977705   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.978016   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.978033   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.978067   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.978803   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.978822   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.978842   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.978883   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.980542   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.980629   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.980683   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.985502   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.985521   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.985822   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.985854   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.985862   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.071684   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.179091576s)
	I0314 00:59:05.071736   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:05.071751   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:05.072016   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:05.072040   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.072050   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:05.072057   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:05.072248   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:05.072260   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.072271   65557 addons.go:470] Verifying addon metrics-server=true in "embed-certs-164135"
	I0314 00:59:05.074420   65557 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:59:02.537641   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:04.539777   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:03.580280   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:04.079957   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:04.580070   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:05.079965   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:05.580193   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:06.079657   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:06.580026   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:07.080460   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:07.579573   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:08.079458   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:03.836267   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:05.837427   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:07.838129   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:05.075856   65557 addons.go:505] duration metric: took 1.675722032s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 00:59:05.639116   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:08.138282   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:07.039088   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:09.538790   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:08.579872   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.080006   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.579949   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:10.079511   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:10.579616   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:11.080003   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:11.580335   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:12.079830   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:12.579519   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:13.080004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.839624   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:12.335977   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:10.138471   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:11.138534   65557 node_ready.go:49] node "embed-certs-164135" has status "Ready":"True"
	I0314 00:59:11.138572   65557 node_ready.go:38] duration metric: took 7.504341185s for node "embed-certs-164135" to be "Ready" ...
	I0314 00:59:11.138593   65557 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:11.145002   65557 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:11.150712   65557 pod_ready.go:92] pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:11.150735   65557 pod_ready.go:81] duration metric: took 5.69376ms for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:11.150743   65557 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:13.157122   65557 pod_ready.go:102] pod "etcd-embed-certs-164135" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:11.539006   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:14.038372   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:13.580021   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.079972   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.580562   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:15.079973   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:15.580183   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:16.080442   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:16.580265   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:17.079726   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:17.580004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:18.080000   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.336576   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:16.836200   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:15.158112   65557 pod_ready.go:92] pod "etcd-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.158134   65557 pod_ready.go:81] duration metric: took 4.0073854s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.158143   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.164046   65557 pod_ready.go:92] pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.164066   65557 pod_ready.go:81] duration metric: took 5.916933ms for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.164075   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.172381   65557 pod_ready.go:92] pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.172400   65557 pod_ready.go:81] duration metric: took 8.319741ms for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.172408   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.178027   65557 pod_ready.go:92] pod "kube-proxy-wjz6d" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.178047   65557 pod_ready.go:81] duration metric: took 5.632365ms for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.178066   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.185425   65557 pod_ready.go:92] pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.185445   65557 pod_ready.go:81] duration metric: took 7.370111ms for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.185455   65557 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:17.191963   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:19.198718   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:16.537469   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:18.537882   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:18.580382   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.079467   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.579813   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:20.080492   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:20.580051   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:21.079982   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:21.579462   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:22.079943   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:22.579753   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:23.079986   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.336004   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:21.835829   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:21.694213   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:24.192099   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:20.538270   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:23.038355   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:23.579609   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:24.080429   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:24.579806   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:25.079568   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:25.580411   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:26.079986   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:26.580297   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:27.079547   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:27.579543   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:28.080116   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:23.837356   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:25.844148   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.336761   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:26.193550   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.693261   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:25.537801   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.038015   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.580503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:29.079562   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:29.579984   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.079977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.579657   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:31.080002   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:31.580430   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:32.079709   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:32.579764   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:33.079717   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.835476   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.335371   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:31.192779   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.194092   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:30.537951   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:32.538810   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:35.038186   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.579468   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:34.079959   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:34.579891   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:35.079953   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:35.579666   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:36.080471   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:36.580528   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:36.580620   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:36.628794   66232 cri.go:89] found id: ""
	I0314 00:59:36.628825   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.628836   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:36.628844   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:36.628903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:36.665474   66232 cri.go:89] found id: ""
	I0314 00:59:36.665504   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.665514   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:36.665521   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:36.665612   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:36.703404   66232 cri.go:89] found id: ""
	I0314 00:59:36.703436   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.703443   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:36.703449   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:36.703515   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:36.739602   66232 cri.go:89] found id: ""
	I0314 00:59:36.739629   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.739636   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:36.739642   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:36.739698   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:36.777836   66232 cri.go:89] found id: ""
	I0314 00:59:36.777862   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.777869   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:36.777875   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:36.777921   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:36.817211   66232 cri.go:89] found id: ""
	I0314 00:59:36.817254   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.817264   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:36.817271   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:36.817320   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:36.855890   66232 cri.go:89] found id: ""
	I0314 00:59:36.855924   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.855943   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:36.855951   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:36.856007   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:36.894333   66232 cri.go:89] found id: ""
	I0314 00:59:36.894360   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.894371   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:36.894391   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:36.894406   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:36.909757   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:36.909796   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:37.039754   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:37.039774   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:37.039785   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:37.100601   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:37.100635   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:37.143950   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:37.143976   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:35.837374   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:38.335068   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:35.692269   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:37.692333   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:37.538270   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:40.039124   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:39.696850   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:39.720410   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:39.720480   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:39.759574   66232 cri.go:89] found id: ""
	I0314 00:59:39.759624   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.759635   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:39.759643   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:39.759719   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:39.802990   66232 cri.go:89] found id: ""
	I0314 00:59:39.803013   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.803021   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:39.803026   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:39.803090   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:39.850691   66232 cri.go:89] found id: ""
	I0314 00:59:39.850718   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.850729   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:39.850736   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:39.850831   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:39.890748   66232 cri.go:89] found id: ""
	I0314 00:59:39.890796   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.890806   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:39.890813   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:39.890871   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:39.929333   66232 cri.go:89] found id: ""
	I0314 00:59:39.929361   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.929368   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:39.929374   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:39.929428   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:39.969207   66232 cri.go:89] found id: ""
	I0314 00:59:39.969241   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.969248   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:39.969254   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:39.969328   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:40.006207   66232 cri.go:89] found id: ""
	I0314 00:59:40.006241   66232 logs.go:276] 0 containers: []
	W0314 00:59:40.006252   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:40.006260   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:40.006343   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:40.047357   66232 cri.go:89] found id: ""
	I0314 00:59:40.047384   66232 logs.go:276] 0 containers: []
	W0314 00:59:40.047391   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:40.047400   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:40.047418   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:40.095431   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:40.095461   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:40.151675   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:40.151710   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:40.169388   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:40.169426   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:40.252915   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:40.252941   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:40.252958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:42.828437   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:42.842753   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:42.842838   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:42.881157   66232 cri.go:89] found id: ""
	I0314 00:59:42.881189   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.881200   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:42.881207   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:42.881267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:42.921364   66232 cri.go:89] found id: ""
	I0314 00:59:42.921393   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.921405   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:42.921412   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:42.921477   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:42.956622   66232 cri.go:89] found id: ""
	I0314 00:59:42.956647   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.956655   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:42.956660   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:42.956705   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:42.994476   66232 cri.go:89] found id: ""
	I0314 00:59:42.994502   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.994514   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:42.994521   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:42.994580   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:43.032061   66232 cri.go:89] found id: ""
	I0314 00:59:43.032089   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.032099   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:43.032106   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:43.032177   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:43.073398   66232 cri.go:89] found id: ""
	I0314 00:59:43.073427   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.073444   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:43.073452   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:43.073527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:40.336003   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.336136   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:40.192758   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.193411   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.538036   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:45.038933   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:43.111407   66232 cri.go:89] found id: ""
	I0314 00:59:43.111891   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.111902   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:43.111909   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:43.111988   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:43.154347   66232 cri.go:89] found id: ""
	I0314 00:59:43.154374   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.154384   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:43.154393   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:43.154422   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:43.202605   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:43.202636   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:43.257108   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:43.257143   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:43.273252   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:43.273282   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:43.347646   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:43.347671   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:43.347687   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
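	The cycle above is minikube's control-plane probe loop while it waits for the apiserver of the v1.20.0 cluster to come up: pgrep looks for a running kube-apiserver process, crictl is asked for every expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and because each listing comes back empty the collector falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The describe-nodes step fails with "connection refused" on localhost:8443 for the same reason the listings are empty: no apiserver container is running yet. Below is a minimal local sketch of the per-component probe, assuming crictl is available on the host; minikube itself issues the same crictl command over SSH (ssh_runner.go), so this is only an illustration of the pattern, not the project's implementation.

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // Components that the log collector above probes for, in the same order.
	    var components = []string{
	    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	    	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	    }

	    // listContainerIDs is a hypothetical local stand-in for the cri.go listing
	    // step: it asks crictl for the IDs of containers whose name matches the
	    // component, exactly like "sudo crictl ps -a --quiet --name=<name>" above.
	    func listContainerIDs(name string) ([]string, error) {
	    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	    	if err != nil {
	    		return nil, err
	    	}
	    	return strings.Fields(string(out)), nil
	    }

	    func main() {
	    	for _, c := range components {
	    		ids, err := listContainerIDs(c)
	    		if err != nil || len(ids) == 0 {
	    			// Mirrors the `No container was found matching "..."` warnings in the log.
	    			fmt.Printf("no container found matching %q\n", c)
	    			continue
	    		}
	    		fmt.Printf("%s: %d container(s)\n", c, len(ids))
	    	}
	    }

	When every probe returns an empty ID list, as it does throughout this trace, the collector has nothing component-specific to show, which is why each cycle ends with the generic kubelet/dmesg/CRI-O/container-status gathering instead.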
	I0314 00:59:45.920045   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:45.934299   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:45.934379   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:45.973556   66232 cri.go:89] found id: ""
	I0314 00:59:45.973588   66232 logs.go:276] 0 containers: []
	W0314 00:59:45.973599   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:45.973607   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:45.973668   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:46.012623   66232 cri.go:89] found id: ""
	I0314 00:59:46.012653   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.012660   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:46.012667   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:46.012720   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:46.052290   66232 cri.go:89] found id: ""
	I0314 00:59:46.052318   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.052328   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:46.052336   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:46.052401   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:46.089098   66232 cri.go:89] found id: ""
	I0314 00:59:46.089129   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.089139   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:46.089147   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:46.089207   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:46.149733   66232 cri.go:89] found id: ""
	I0314 00:59:46.149768   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.149778   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:46.149787   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:46.149856   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:46.210517   66232 cri.go:89] found id: ""
	I0314 00:59:46.210548   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.210555   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:46.210563   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:46.210631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:46.275257   66232 cri.go:89] found id: ""
	I0314 00:59:46.275288   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.275299   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:46.275307   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:46.275373   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:46.319784   66232 cri.go:89] found id: ""
	I0314 00:59:46.319808   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.319819   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:46.319829   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:46.319843   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:46.366285   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:46.366319   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:46.423978   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:46.424015   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:46.438508   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:46.438535   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:46.509518   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:46.509538   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:46.509552   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:44.337116   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:46.341237   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:44.698272   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:47.192460   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.193298   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:47.537766   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.541370   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.089210   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:49.105225   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:49.105298   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:49.146293   66232 cri.go:89] found id: ""
	I0314 00:59:49.146319   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.146326   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:49.146331   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:49.146377   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:49.190814   66232 cri.go:89] found id: ""
	I0314 00:59:49.190838   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.190847   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:49.190854   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:49.190910   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:49.230181   66232 cri.go:89] found id: ""
	I0314 00:59:49.230206   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.230214   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:49.230219   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:49.230267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:49.268437   66232 cri.go:89] found id: ""
	I0314 00:59:49.268468   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.268479   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:49.268486   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:49.268547   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:49.306838   66232 cri.go:89] found id: ""
	I0314 00:59:49.306869   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.306877   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:49.306883   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:49.306944   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:49.348907   66232 cri.go:89] found id: ""
	I0314 00:59:49.348937   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.348948   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:49.348956   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:49.349014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:49.391993   66232 cri.go:89] found id: ""
	I0314 00:59:49.392017   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.392025   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:49.392030   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:49.392133   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:49.433957   66232 cri.go:89] found id: ""
	I0314 00:59:49.433988   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.434000   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:49.434011   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:49.434026   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:49.490808   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:49.490846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:49.506203   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:49.506231   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:49.596998   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:49.597017   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:49.597034   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:49.683358   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:49.683396   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:52.230217   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:52.243787   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:52.243845   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:52.284399   66232 cri.go:89] found id: ""
	I0314 00:59:52.284424   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.284434   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:52.284441   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:52.284486   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:52.319413   66232 cri.go:89] found id: ""
	I0314 00:59:52.319439   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.319450   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:52.319457   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:52.319517   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:52.355774   66232 cri.go:89] found id: ""
	I0314 00:59:52.355804   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.355812   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:52.355818   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:52.355873   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:52.393420   66232 cri.go:89] found id: ""
	I0314 00:59:52.393445   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.393453   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:52.393459   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:52.393562   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:52.435598   66232 cri.go:89] found id: ""
	I0314 00:59:52.435627   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.435637   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:52.435646   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:52.435700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:52.478202   66232 cri.go:89] found id: ""
	I0314 00:59:52.478230   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.478241   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:52.478250   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:52.478300   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:52.515135   66232 cri.go:89] found id: ""
	I0314 00:59:52.515165   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.515176   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:52.515185   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:52.515251   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:52.553094   66232 cri.go:89] found id: ""
	I0314 00:59:52.553126   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.553143   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:52.553150   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:52.553174   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:52.568538   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:52.568565   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:52.643136   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:52.643164   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:52.643180   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:52.729674   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:52.729708   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:52.778312   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:52.778343   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:48.837200   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:51.336514   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:53.338910   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:51.693709   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:53.694241   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:52.037993   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:54.038771   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:55.333953   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:55.348232   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:55.348292   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:55.386488   66232 cri.go:89] found id: ""
	I0314 00:59:55.386517   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.386526   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:55.386534   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:55.386597   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:55.428706   66232 cri.go:89] found id: ""
	I0314 00:59:55.428737   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.428748   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:55.428755   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:55.428820   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:55.465448   66232 cri.go:89] found id: ""
	I0314 00:59:55.465478   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.465489   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:55.465495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:55.465558   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:55.503442   66232 cri.go:89] found id: ""
	I0314 00:59:55.503469   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.503479   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:55.503487   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:55.503582   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:55.542098   66232 cri.go:89] found id: ""
	I0314 00:59:55.542127   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.542137   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:55.542145   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:55.542209   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:55.580298   66232 cri.go:89] found id: ""
	I0314 00:59:55.580321   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.580329   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:55.580335   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:55.580405   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:55.625460   66232 cri.go:89] found id: ""
	I0314 00:59:55.625482   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.625489   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:55.625495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:55.625544   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:55.663273   66232 cri.go:89] found id: ""
	I0314 00:59:55.663301   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.663316   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:55.663327   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:55.663373   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:55.680020   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:55.680047   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:55.764504   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:55.764523   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:55.764537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:55.842804   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:55.842837   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:55.889505   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:55.889540   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:55.836332   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.335436   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:56.193387   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.692808   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:56.045666   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.538405   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.445178   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:58.459321   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:58.459397   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:58.498338   66232 cri.go:89] found id: ""
	I0314 00:59:58.498362   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.498369   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:58.498374   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:58.498422   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:58.536406   66232 cri.go:89] found id: ""
	I0314 00:59:58.536434   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.536444   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:58.536451   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:58.536509   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:58.574902   66232 cri.go:89] found id: ""
	I0314 00:59:58.574930   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.574937   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:58.574943   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:58.574988   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:58.613132   66232 cri.go:89] found id: ""
	I0314 00:59:58.613154   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.613162   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:58.613167   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:58.613211   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:58.651052   66232 cri.go:89] found id: ""
	I0314 00:59:58.651076   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.651085   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:58.651104   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:58.651170   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:58.686347   66232 cri.go:89] found id: ""
	I0314 00:59:58.686375   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.686385   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:58.686393   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:58.686443   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:58.725992   66232 cri.go:89] found id: ""
	I0314 00:59:58.726021   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.726030   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:58.726037   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:58.726113   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:58.764130   66232 cri.go:89] found id: ""
	I0314 00:59:58.764153   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.764161   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:58.764169   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:58.764181   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:58.816153   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:58.816195   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:58.831675   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:58.831703   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:58.912867   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:58.912890   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:58.912902   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:59.000502   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:59.000537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
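	The "container status" line that closes each cycle uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, so the collector still gets a container listing even if crictl is missing or erroring. The sketch below is a hypothetical local equivalent of that fallback order (crictl first, then docker), again only an illustration of the logged one-liner, not minikube's code path, which runs it over SSH.

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // containerStatus tries crictl first and falls back to docker, in the same
	    // order as the logged shell fallback above.
	    func containerStatus() (string, error) {
	    	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	    	if err == nil {
	    		return string(out), nil
	    	}
	    	// crictl missing or failing: fall back to docker, as the "|| sudo docker ps -a" does.
	    	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	    	if err != nil {
	    		return "", fmt.Errorf("neither crictl nor docker could list containers: %w", err)
	    	}
	    	return string(out), nil
	    }

	    func main() {
	    	status, err := containerStatus()
	    	if err != nil {
	    		fmt.Println("error:", err)
	    		return
	    	}
	    	fmt.Print(status)
	    }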
	I0314 01:00:01.544701   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:01.561114   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:01.561192   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:01.603886   66232 cri.go:89] found id: ""
	I0314 01:00:01.603916   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.603924   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:01.603929   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:01.603989   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:01.645142   66232 cri.go:89] found id: ""
	I0314 01:00:01.645174   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.645189   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:01.645196   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:01.645248   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:01.686281   66232 cri.go:89] found id: ""
	I0314 01:00:01.686317   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.686326   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:01.686332   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:01.686389   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:01.729909   66232 cri.go:89] found id: ""
	I0314 01:00:01.729945   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.729955   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:01.729963   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:01.730029   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:01.773709   66232 cri.go:89] found id: ""
	I0314 01:00:01.773746   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.773754   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:01.773770   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:01.773833   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:01.813535   66232 cri.go:89] found id: ""
	I0314 01:00:01.813560   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.813568   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:01.813573   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:01.813632   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:01.855452   66232 cri.go:89] found id: ""
	I0314 01:00:01.855482   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.855493   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:01.855499   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:01.855561   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:01.892261   66232 cri.go:89] found id: ""
	I0314 01:00:01.892287   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.892297   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:01.892308   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:01.892322   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:01.945227   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:01.945258   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:01.961280   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:01.961307   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:02.039204   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:02.039227   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:02.039241   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:02.116966   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:02.117002   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:00.840447   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:03.335752   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:00.693223   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:02.694565   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:00.538670   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:02.539348   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:05.037780   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:04.659869   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:04.673750   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:04.673818   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:04.713767   66232 cri.go:89] found id: ""
	I0314 01:00:04.713802   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.713813   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:04.713820   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:04.713882   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:04.750205   66232 cri.go:89] found id: ""
	I0314 01:00:04.750240   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.750252   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:04.750259   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:04.750323   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:04.789742   66232 cri.go:89] found id: ""
	I0314 01:00:04.789770   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.789778   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:04.789784   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:04.789832   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:04.826033   66232 cri.go:89] found id: ""
	I0314 01:00:04.826071   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.826091   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:04.826099   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:04.826161   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:04.865283   66232 cri.go:89] found id: ""
	I0314 01:00:04.865320   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.865330   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:04.865339   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:04.865387   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:04.906716   66232 cri.go:89] found id: ""
	I0314 01:00:04.906745   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.906756   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:04.906774   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:04.906835   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:04.943834   66232 cri.go:89] found id: ""
	I0314 01:00:04.943867   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.943879   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:04.943887   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:04.943953   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:04.986408   66232 cri.go:89] found id: ""
	I0314 01:00:04.986435   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.986445   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:04.986456   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:04.986472   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:05.040543   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:05.040583   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:05.055657   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:05.055685   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:05.133883   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:05.133907   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:05.133921   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:05.213133   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:05.213170   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:07.754533   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:07.768008   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:07.768084   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:07.807785   66232 cri.go:89] found id: ""
	I0314 01:00:07.807814   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.807823   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:07.807830   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:07.807889   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:07.847500   66232 cri.go:89] found id: ""
	I0314 01:00:07.847529   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.847539   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:07.847547   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:07.847609   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:07.886507   66232 cri.go:89] found id: ""
	I0314 01:00:07.886534   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.886557   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:07.886563   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:07.886619   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:07.923881   66232 cri.go:89] found id: ""
	I0314 01:00:07.923908   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.923918   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:07.923925   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:07.923985   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:07.959149   66232 cri.go:89] found id: ""
	I0314 01:00:07.959179   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.959190   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:07.959198   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:07.959257   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:07.995821   66232 cri.go:89] found id: ""
	I0314 01:00:07.995849   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.995861   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:07.995869   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:07.995926   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:08.033530   66232 cri.go:89] found id: ""
	I0314 01:00:08.033554   66232 logs.go:276] 0 containers: []
	W0314 01:00:08.033561   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:08.033567   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:08.033613   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:08.069304   66232 cri.go:89] found id: ""
	I0314 01:00:08.069332   66232 logs.go:276] 0 containers: []
	W0314 01:00:08.069341   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:08.069352   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:08.069366   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:05.838145   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:08.336193   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:05.192544   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:07.193040   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:09.195569   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:07.040795   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:09.538606   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:08.122695   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:08.122727   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:08.138439   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:08.138466   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:08.220553   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:08.220574   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:08.220586   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:08.301108   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:08.301143   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:10.858540   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:10.872473   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:10.872527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:10.911114   66232 cri.go:89] found id: ""
	I0314 01:00:10.911143   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.911154   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:10.911161   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:10.911218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:10.951647   66232 cri.go:89] found id: ""
	I0314 01:00:10.951678   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.951690   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:10.951697   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:10.951764   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:10.989244   66232 cri.go:89] found id: ""
	I0314 01:00:10.989272   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.989283   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:10.989291   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:10.989368   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:11.029977   66232 cri.go:89] found id: ""
	I0314 01:00:11.030004   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.030011   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:11.030017   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:11.030079   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:11.067444   66232 cri.go:89] found id: ""
	I0314 01:00:11.067467   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.067474   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:11.067480   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:11.067527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:11.104202   66232 cri.go:89] found id: ""
	I0314 01:00:11.104225   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.104233   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:11.104242   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:11.104302   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:11.143323   66232 cri.go:89] found id: ""
	I0314 01:00:11.143348   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.143376   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:11.143384   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:11.143438   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:11.182568   66232 cri.go:89] found id: ""
	I0314 01:00:11.182598   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.182608   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:11.182619   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:11.182640   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:11.199532   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:11.199572   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:11.276697   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:11.276722   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:11.276737   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:11.362086   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:11.362121   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:11.407686   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:11.407721   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:10.338610   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:12.835743   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:11.201752   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:13.692443   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:12.038010   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:14.038915   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:13.965971   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:13.981052   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:13.981124   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:14.021047   66232 cri.go:89] found id: ""
	I0314 01:00:14.021073   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.021085   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:14.021092   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:14.021150   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:14.066605   66232 cri.go:89] found id: ""
	I0314 01:00:14.066632   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.066638   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:14.066644   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:14.066689   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:14.105253   66232 cri.go:89] found id: ""
	I0314 01:00:14.105281   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.105290   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:14.105299   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:14.105407   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:14.141084   66232 cri.go:89] found id: ""
	I0314 01:00:14.141116   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.141126   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:14.141133   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:14.141194   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:14.177883   66232 cri.go:89] found id: ""
	I0314 01:00:14.177914   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.177924   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:14.177944   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:14.178010   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:14.217102   66232 cri.go:89] found id: ""
	I0314 01:00:14.217133   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.217144   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:14.217162   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:14.217218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:14.256624   66232 cri.go:89] found id: ""
	I0314 01:00:14.256652   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.256662   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:14.256669   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:14.256731   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:14.295330   66232 cri.go:89] found id: ""
	I0314 01:00:14.295358   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.295368   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:14.295378   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:14.295395   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:14.351898   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:14.351947   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:14.368360   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:14.368399   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:14.447629   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:14.447651   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:14.447678   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:14.536275   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:14.536307   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:17.079641   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:17.093657   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:17.093730   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:17.131290   66232 cri.go:89] found id: ""
	I0314 01:00:17.131318   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.131327   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:17.131333   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:17.131379   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:17.169832   66232 cri.go:89] found id: ""
	I0314 01:00:17.169864   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.169874   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:17.169882   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:17.169942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:17.206961   66232 cri.go:89] found id: ""
	I0314 01:00:17.206982   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.206989   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:17.206994   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:17.207047   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:17.245675   66232 cri.go:89] found id: ""
	I0314 01:00:17.245703   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.245714   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:17.245721   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:17.245776   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:17.287768   66232 cri.go:89] found id: ""
	I0314 01:00:17.287797   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.287808   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:17.287815   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:17.287881   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:17.322555   66232 cri.go:89] found id: ""
	I0314 01:00:17.322590   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.322600   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:17.322608   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:17.322669   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:17.361149   66232 cri.go:89] found id: ""
	I0314 01:00:17.361176   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.361190   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:17.361197   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:17.361255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:17.397191   66232 cri.go:89] found id: ""
	I0314 01:00:17.397218   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.397227   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:17.397236   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:17.397248   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:17.412959   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:17.412988   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:17.493344   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:17.493364   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:17.493375   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:17.573531   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:17.573564   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:17.616326   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:17.616369   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:14.837070   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:17.335625   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:15.693453   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:18.192702   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:16.537571   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:18.537742   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.171238   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:20.186834   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:20.186890   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:20.226834   66232 cri.go:89] found id: ""
	I0314 01:00:20.226856   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.226863   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:20.226868   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:20.226916   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:20.263003   66232 cri.go:89] found id: ""
	I0314 01:00:20.263032   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.263043   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:20.263052   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:20.263135   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:20.306354   66232 cri.go:89] found id: ""
	I0314 01:00:20.306378   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.306388   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:20.306397   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:20.306458   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:20.342460   66232 cri.go:89] found id: ""
	I0314 01:00:20.342491   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.342501   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:20.342509   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:20.342572   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:20.383367   66232 cri.go:89] found id: ""
	I0314 01:00:20.383395   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.383406   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:20.383414   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:20.383474   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:20.423190   66232 cri.go:89] found id: ""
	I0314 01:00:20.423220   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.423231   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:20.423240   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:20.423296   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:20.473454   66232 cri.go:89] found id: ""
	I0314 01:00:20.473501   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.473510   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:20.473518   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:20.473577   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:20.517922   66232 cri.go:89] found id: ""
	I0314 01:00:20.517954   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.517964   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:20.517976   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:20.517992   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:20.572023   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:20.572059   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:20.589573   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:20.589601   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:20.670843   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:20.670866   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:20.670881   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:20.753165   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:20.753201   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:19.336013   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:21.338995   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.194020   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:22.194237   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.539631   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:22.539868   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:25.037490   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:23.299823   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:23.313303   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:23.313398   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:23.352500   66232 cri.go:89] found id: ""
	I0314 01:00:23.352531   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.352542   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:23.352550   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:23.352610   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:23.391967   66232 cri.go:89] found id: ""
	I0314 01:00:23.391997   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.392005   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:23.392013   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:23.392078   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:23.433269   66232 cri.go:89] found id: ""
	I0314 01:00:23.433303   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.433314   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:23.433324   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:23.433388   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:23.471251   66232 cri.go:89] found id: ""
	I0314 01:00:23.471278   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.471290   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:23.471297   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:23.471359   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:23.507920   66232 cri.go:89] found id: ""
	I0314 01:00:23.507952   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.507960   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:23.507966   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:23.508023   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:23.550432   66232 cri.go:89] found id: ""
	I0314 01:00:23.550464   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.550474   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:23.550483   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:23.550570   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:23.589750   66232 cri.go:89] found id: ""
	I0314 01:00:23.589773   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.589781   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:23.589789   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:23.589853   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:23.626135   66232 cri.go:89] found id: ""
	I0314 01:00:23.626171   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.626191   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:23.626202   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:23.626217   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:23.681729   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:23.681763   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:23.698219   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:23.698246   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:23.773285   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:23.773309   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:23.773321   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:23.856417   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:23.856449   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:26.399787   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:26.414459   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:26.414525   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:26.452117   66232 cri.go:89] found id: ""
	I0314 01:00:26.452142   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.452153   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:26.452162   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:26.452223   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:26.488892   66232 cri.go:89] found id: ""
	I0314 01:00:26.488918   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.488925   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:26.488931   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:26.488980   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:26.530194   66232 cri.go:89] found id: ""
	I0314 01:00:26.530224   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.530234   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:26.530242   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:26.530307   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:26.571356   66232 cri.go:89] found id: ""
	I0314 01:00:26.571382   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.571394   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:26.571402   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:26.571469   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:26.611465   66232 cri.go:89] found id: ""
	I0314 01:00:26.611492   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.611500   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:26.611522   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:26.611572   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:26.649783   66232 cri.go:89] found id: ""
	I0314 01:00:26.649811   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.649821   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:26.649830   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:26.649894   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:26.687519   66232 cri.go:89] found id: ""
	I0314 01:00:26.687546   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.687556   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:26.687569   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:26.687631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:26.726277   66232 cri.go:89] found id: ""
	I0314 01:00:26.726311   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.726322   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:26.726333   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:26.726349   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:26.743133   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:26.743162   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:26.824026   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:26.824046   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:26.824062   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:26.907032   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:26.907065   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:26.977583   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:26.977609   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:23.837152   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:26.335576   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:24.694276   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:27.192662   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.193302   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:27.037952   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.038545   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.530758   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:29.546984   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:29.547050   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:29.589191   66232 cri.go:89] found id: ""
	I0314 01:00:29.589214   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.589222   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:29.589231   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:29.589294   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:29.630380   66232 cri.go:89] found id: ""
	I0314 01:00:29.630407   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.630419   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:29.630426   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:29.630488   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:29.667407   66232 cri.go:89] found id: ""
	I0314 01:00:29.667443   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.667455   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:29.667463   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:29.667524   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:29.705745   66232 cri.go:89] found id: ""
	I0314 01:00:29.705776   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.705784   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:29.705790   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:29.705851   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:29.745280   66232 cri.go:89] found id: ""
	I0314 01:00:29.745314   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.745324   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:29.745335   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:29.745390   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:29.782900   66232 cri.go:89] found id: ""
	I0314 01:00:29.782935   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.782945   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:29.782954   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:29.783014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:29.825324   66232 cri.go:89] found id: ""
	I0314 01:00:29.825352   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.825363   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:29.825371   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:29.825436   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:29.869433   66232 cri.go:89] found id: ""
	I0314 01:00:29.869466   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.869476   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:29.869487   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:29.869502   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:29.912468   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:29.912494   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:29.965515   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:29.965555   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:29.982343   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:29.982367   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:30.057772   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:30.057797   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:30.057814   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:32.644707   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:32.667874   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:32.667950   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:32.727931   66232 cri.go:89] found id: ""
	I0314 01:00:32.727960   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.727971   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:32.727979   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:32.728038   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:32.766885   66232 cri.go:89] found id: ""
	I0314 01:00:32.766911   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.766921   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:32.766929   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:32.766989   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:32.804099   66232 cri.go:89] found id: ""
	I0314 01:00:32.804128   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.804137   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:32.804143   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:32.804200   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:32.845468   66232 cri.go:89] found id: ""
	I0314 01:00:32.845498   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.845507   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:32.845516   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:32.845607   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:32.884350   66232 cri.go:89] found id: ""
	I0314 01:00:32.884372   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.884380   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:32.884386   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:32.884437   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:32.920634   66232 cri.go:89] found id: ""
	I0314 01:00:32.920676   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.920692   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:32.920700   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:32.920756   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:32.959586   66232 cri.go:89] found id: ""
	I0314 01:00:32.959616   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.959627   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:32.959634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:32.959699   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:32.998814   66232 cri.go:89] found id: ""
	I0314 01:00:32.998854   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.998865   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:32.998882   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:32.998895   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:33.054782   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:33.054813   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:33.069772   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:33.069807   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:00:28.836740   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.335908   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:33.336613   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.692393   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:33.695343   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.539723   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:34.038889   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	W0314 01:00:33.153893   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:33.153913   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:33.153925   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:33.234165   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:33.234197   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:35.781872   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:35.797220   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:35.797300   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:35.836749   66232 cri.go:89] found id: ""
	I0314 01:00:35.836773   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.836779   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:35.836785   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:35.836841   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:35.875754   66232 cri.go:89] found id: ""
	I0314 01:00:35.875782   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.875790   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:35.875797   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:35.875844   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:35.914337   66232 cri.go:89] found id: ""
	I0314 01:00:35.914360   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.914368   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:35.914373   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:35.914428   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:35.954287   66232 cri.go:89] found id: ""
	I0314 01:00:35.954306   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.954313   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:35.954318   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:35.954365   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:35.995361   66232 cri.go:89] found id: ""
	I0314 01:00:35.995385   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.995393   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:35.995398   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:35.995455   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:36.040462   66232 cri.go:89] found id: ""
	I0314 01:00:36.040488   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.040497   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:36.040503   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:36.040567   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:36.078740   66232 cri.go:89] found id: ""
	I0314 01:00:36.078786   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.078797   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:36.078814   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:36.078885   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:36.120165   66232 cri.go:89] found id: ""
	I0314 01:00:36.120193   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.120203   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:36.120213   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:36.120239   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:36.136275   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:36.136312   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:36.217907   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:36.217929   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:36.217944   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:36.295177   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:36.295212   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:36.342587   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:36.342623   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:35.336966   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:37.337764   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:36.193887   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.693150   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:36.538529   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.538996   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.900832   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:38.914693   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:38.914782   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:38.954297   66232 cri.go:89] found id: ""
	I0314 01:00:38.954333   66232 logs.go:276] 0 containers: []
	W0314 01:00:38.954347   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:38.954354   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:38.954414   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:38.992427   66232 cri.go:89] found id: ""
	I0314 01:00:38.992458   66232 logs.go:276] 0 containers: []
	W0314 01:00:38.992468   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:38.992474   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:38.992521   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:39.028595   66232 cri.go:89] found id: ""
	I0314 01:00:39.028629   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.028640   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:39.028647   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:39.028707   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:39.064418   66232 cri.go:89] found id: ""
	I0314 01:00:39.064443   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.064450   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:39.064456   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:39.064503   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:39.101007   66232 cri.go:89] found id: ""
	I0314 01:00:39.101050   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.101060   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:39.101066   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:39.101125   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:39.142913   66232 cri.go:89] found id: ""
	I0314 01:00:39.142940   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.142950   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:39.142957   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:39.143018   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:39.179957   66232 cri.go:89] found id: ""
	I0314 01:00:39.179986   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.179997   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:39.180007   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:39.180068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:39.219688   66232 cri.go:89] found id: ""
	I0314 01:00:39.219712   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.219720   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:39.219730   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:39.219747   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:39.234611   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:39.234642   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:39.306760   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:39.306808   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:39.306824   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:39.390739   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:39.390799   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:39.441782   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:39.441813   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:41.994667   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:42.008795   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:42.008865   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:42.045814   66232 cri.go:89] found id: ""
	I0314 01:00:42.045839   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.045846   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:42.045852   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:42.045903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:42.085519   66232 cri.go:89] found id: ""
	I0314 01:00:42.085550   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.085563   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:42.085571   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:42.085636   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:42.127334   66232 cri.go:89] found id: ""
	I0314 01:00:42.127359   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.127368   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:42.127374   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:42.127425   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:42.168890   66232 cri.go:89] found id: ""
	I0314 01:00:42.168915   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.168923   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:42.168929   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:42.168990   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:42.209915   66232 cri.go:89] found id: ""
	I0314 01:00:42.209937   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.209945   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:42.209950   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:42.210005   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:42.250858   66232 cri.go:89] found id: ""
	I0314 01:00:42.250880   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.250888   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:42.250897   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:42.250952   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:42.288731   66232 cri.go:89] found id: ""
	I0314 01:00:42.288779   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.288791   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:42.288799   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:42.288854   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:42.329002   66232 cri.go:89] found id: ""
	I0314 01:00:42.329030   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.329041   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:42.329052   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:42.329066   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:42.371408   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:42.371435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:42.429017   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:42.429053   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:42.446217   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:42.446255   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:42.525765   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:42.525786   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:42.525798   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:39.338188   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:41.836306   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:40.694284   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:43.193538   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:40.540167   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:43.039511   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.122600   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:45.137115   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:45.137172   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:45.177658   66232 cri.go:89] found id: ""
	I0314 01:00:45.177685   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.177693   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:45.177698   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:45.177758   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:45.218191   66232 cri.go:89] found id: ""
	I0314 01:00:45.218220   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.218228   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:45.218234   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:45.218291   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:45.263650   66232 cri.go:89] found id: ""
	I0314 01:00:45.263673   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.263682   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:45.263688   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:45.263741   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:45.299533   66232 cri.go:89] found id: ""
	I0314 01:00:45.299562   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.299573   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:45.299579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:45.299626   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:45.338985   66232 cri.go:89] found id: ""
	I0314 01:00:45.339011   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.339021   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:45.339028   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:45.339089   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:45.380178   66232 cri.go:89] found id: ""
	I0314 01:00:45.380202   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.380210   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:45.380216   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:45.380272   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:45.420424   66232 cri.go:89] found id: ""
	I0314 01:00:45.420458   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.420470   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:45.420478   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:45.420540   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:45.460829   66232 cri.go:89] found id: ""
	I0314 01:00:45.460852   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.460860   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:45.460870   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:45.460886   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:45.516541   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:45.516578   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:45.532856   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:45.532880   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:45.611749   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:45.611772   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:45.611786   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:45.693268   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:45.693297   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:43.836776   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:46.336671   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.692531   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:47.692748   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.539526   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:47.542274   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.037560   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:48.240420   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:48.254985   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:48.255045   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:48.294167   66232 cri.go:89] found id: ""
	I0314 01:00:48.294190   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.294198   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:48.294204   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:48.294265   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:48.331189   66232 cri.go:89] found id: ""
	I0314 01:00:48.331214   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.331223   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:48.331231   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:48.331291   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:48.367601   66232 cri.go:89] found id: ""
	I0314 01:00:48.367641   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.367652   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:48.367660   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:48.367723   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:48.405032   66232 cri.go:89] found id: ""
	I0314 01:00:48.405061   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.405072   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:48.405080   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:48.405148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:48.444641   66232 cri.go:89] found id: ""
	I0314 01:00:48.444664   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.444672   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:48.444678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:48.444737   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:48.481624   66232 cri.go:89] found id: ""
	I0314 01:00:48.481653   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.481661   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:48.481667   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:48.481718   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:48.518944   66232 cri.go:89] found id: ""
	I0314 01:00:48.518976   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.518984   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:48.518989   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:48.519047   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:48.558455   66232 cri.go:89] found id: ""
	I0314 01:00:48.558495   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.558506   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:48.558518   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:48.558533   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:48.604953   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:48.604983   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:48.655766   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:48.655799   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:48.670370   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:48.670395   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:48.750567   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:48.750588   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:48.750601   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:51.342004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:51.356115   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:51.356180   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:51.393740   66232 cri.go:89] found id: ""
	I0314 01:00:51.393766   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.393773   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:51.393778   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:51.393824   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:51.432939   66232 cri.go:89] found id: ""
	I0314 01:00:51.432969   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.432980   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:51.432998   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:51.433066   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:51.469309   66232 cri.go:89] found id: ""
	I0314 01:00:51.469332   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.469340   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:51.469345   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:51.469395   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:51.506576   66232 cri.go:89] found id: ""
	I0314 01:00:51.506606   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.506618   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:51.506626   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:51.506687   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:51.547323   66232 cri.go:89] found id: ""
	I0314 01:00:51.547348   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.547358   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:51.547365   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:51.547422   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:51.588257   66232 cri.go:89] found id: ""
	I0314 01:00:51.588281   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.588289   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:51.588295   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:51.588353   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:51.629026   66232 cri.go:89] found id: ""
	I0314 01:00:51.629049   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.629057   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:51.629064   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:51.629116   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:51.668857   66232 cri.go:89] found id: ""
	I0314 01:00:51.668890   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.668903   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:51.668914   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:51.668930   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:51.724282   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:51.724329   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:51.739513   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:51.739543   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:51.815089   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:51.815116   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:51.815132   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:51.898576   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:51.898613   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:48.836517   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.837605   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:53.334491   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.192748   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:52.694281   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:52.038194   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:54.538685   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:54.441122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:54.456300   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:54.456358   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:54.492731   66232 cri.go:89] found id: ""
	I0314 01:00:54.492764   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.492776   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:54.492784   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:54.492847   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:54.530965   66232 cri.go:89] found id: ""
	I0314 01:00:54.530994   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.531005   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:54.531013   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:54.531075   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:54.570440   66232 cri.go:89] found id: ""
	I0314 01:00:54.570470   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.570487   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:54.570495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:54.570557   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:54.611569   66232 cri.go:89] found id: ""
	I0314 01:00:54.611592   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.611599   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:54.611606   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:54.611660   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:54.648383   66232 cri.go:89] found id: ""
	I0314 01:00:54.648412   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.648421   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:54.648427   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:54.648476   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:54.686598   66232 cri.go:89] found id: ""
	I0314 01:00:54.686621   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.686636   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:54.686644   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:54.686701   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:54.726413   66232 cri.go:89] found id: ""
	I0314 01:00:54.726436   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.726444   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:54.726450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:54.726496   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:54.764126   66232 cri.go:89] found id: ""
	I0314 01:00:54.764167   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.764177   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:54.764187   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:54.764201   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:54.841584   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:54.841612   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:54.841628   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:54.929736   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:54.929770   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:54.972612   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:54.972638   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:55.038415   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:55.038443   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:57.553419   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:57.567807   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:57.567865   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:57.608042   66232 cri.go:89] found id: ""
	I0314 01:00:57.608069   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.608077   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:57.608082   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:57.608138   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:57.647991   66232 cri.go:89] found id: ""
	I0314 01:00:57.648022   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.648031   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:57.648036   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:57.648096   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:57.687506   66232 cri.go:89] found id: ""
	I0314 01:00:57.687529   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.687537   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:57.687544   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:57.687603   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:57.726178   66232 cri.go:89] found id: ""
	I0314 01:00:57.726214   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.726224   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:57.726233   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:57.726297   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:57.763847   66232 cri.go:89] found id: ""
	I0314 01:00:57.763874   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.763881   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:57.763887   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:57.763946   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:57.800962   66232 cri.go:89] found id: ""
	I0314 01:00:57.800990   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.801001   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:57.801010   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:57.801063   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:57.838942   66232 cri.go:89] found id: ""
	I0314 01:00:57.838963   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.838970   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:57.838975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:57.839021   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:57.875376   66232 cri.go:89] found id: ""
	I0314 01:00:57.875405   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.875415   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:57.875424   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:57.875435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:57.917732   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:57.917755   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:57.971528   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:57.971561   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:57.986854   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:57.986879   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:58.066955   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:58.066975   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:58.066985   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:55.337356   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.836856   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:55.191933   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.193287   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:59.197833   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.039559   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:59.537165   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:00.655786   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:00.672026   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:00.672105   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:00.711128   66232 cri.go:89] found id: ""
	I0314 01:01:00.711157   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.711167   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:00.711174   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:00.711236   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:00.748236   66232 cri.go:89] found id: ""
	I0314 01:01:00.748264   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.748276   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:00.748284   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:00.748347   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:00.787436   66232 cri.go:89] found id: ""
	I0314 01:01:00.787470   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.787478   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:00.787486   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:00.787536   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:00.828583   66232 cri.go:89] found id: ""
	I0314 01:01:00.828605   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.828615   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:00.828623   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:00.828683   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:00.866856   66232 cri.go:89] found id: ""
	I0314 01:01:00.866885   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.866896   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:00.866903   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:00.866964   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:00.904860   66232 cri.go:89] found id: ""
	I0314 01:01:00.904883   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.904890   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:00.904895   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:00.904943   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:00.942199   66232 cri.go:89] found id: ""
	I0314 01:01:00.942232   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.942243   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:00.942253   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:00.942322   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:01.003925   66232 cri.go:89] found id: ""
	I0314 01:01:01.003951   66232 logs.go:276] 0 containers: []
	W0314 01:01:01.003961   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:01.003972   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:01.003987   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:01.057875   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:01.057903   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:01.074102   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:01.074128   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:01.147570   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:01.147602   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:01.147617   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:01.229816   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:01.229846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:00.337903   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:02.836288   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:01.693336   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:04.193878   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:01.539596   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:04.037927   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:03.775990   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:03.789826   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:03.789893   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:03.832595   66232 cri.go:89] found id: ""
	I0314 01:01:03.832620   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.832631   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:03.832639   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:03.832701   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:03.870895   66232 cri.go:89] found id: ""
	I0314 01:01:03.870914   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.870922   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:03.870928   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:03.870975   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:03.909337   66232 cri.go:89] found id: ""
	I0314 01:01:03.909368   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.909379   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:03.909387   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:03.909447   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:03.952071   66232 cri.go:89] found id: ""
	I0314 01:01:03.952100   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.952110   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:03.952119   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:03.952182   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:03.989374   66232 cri.go:89] found id: ""
	I0314 01:01:03.989403   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.989413   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:03.989421   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:03.989470   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:04.027654   66232 cri.go:89] found id: ""
	I0314 01:01:04.027683   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.027693   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:04.027702   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:04.027770   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:04.064870   66232 cri.go:89] found id: ""
	I0314 01:01:04.064904   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.064915   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:04.064923   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:04.064978   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:04.103214   66232 cri.go:89] found id: ""
	I0314 01:01:04.103246   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.103257   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:04.103268   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:04.103282   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:04.154061   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:04.154098   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:04.168955   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:04.168981   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:04.245214   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:04.245239   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:04.245254   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:04.321782   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:04.321822   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:06.864312   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:06.879181   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:06.879259   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:06.919707   66232 cri.go:89] found id: ""
	I0314 01:01:06.919731   66232 logs.go:276] 0 containers: []
	W0314 01:01:06.919742   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:06.919749   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:06.919809   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:06.964118   66232 cri.go:89] found id: ""
	I0314 01:01:06.964154   66232 logs.go:276] 0 containers: []
	W0314 01:01:06.964165   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:06.964173   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:06.964222   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:07.005923   66232 cri.go:89] found id: ""
	I0314 01:01:07.005948   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.005955   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:07.005961   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:07.006014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:07.048297   66232 cri.go:89] found id: ""
	I0314 01:01:07.048329   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.048336   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:07.048342   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:07.048400   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:07.089009   66232 cri.go:89] found id: ""
	I0314 01:01:07.089036   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.089044   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:07.089049   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:07.089108   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:07.125228   66232 cri.go:89] found id: ""
	I0314 01:01:07.125251   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.125259   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:07.125269   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:07.125329   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:07.163710   66232 cri.go:89] found id: ""
	I0314 01:01:07.163736   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.163743   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:07.163751   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:07.163797   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:07.202886   66232 cri.go:89] found id: ""
	I0314 01:01:07.202909   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.202916   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:07.202924   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:07.202936   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:07.249071   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:07.249098   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:07.304923   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:07.304958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:07.319983   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:07.320011   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:07.398592   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:07.398627   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:07.398640   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:05.337479   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:07.836304   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:06.692373   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.192747   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:06.539182   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.038291   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.987439   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:10.002348   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:10.002424   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:10.039153   66232 cri.go:89] found id: ""
	I0314 01:01:10.039173   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.039179   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:10.039185   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:10.039236   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:10.073527   66232 cri.go:89] found id: ""
	I0314 01:01:10.073557   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.073568   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:10.073575   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:10.073650   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:10.112192   66232 cri.go:89] found id: ""
	I0314 01:01:10.112213   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.112223   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:10.112230   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:10.112288   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:10.152821   66232 cri.go:89] found id: ""
	I0314 01:01:10.152848   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.152857   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:10.152862   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:10.152919   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:10.189327   66232 cri.go:89] found id: ""
	I0314 01:01:10.189352   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.189364   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:10.189371   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:10.189427   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:10.233885   66232 cri.go:89] found id: ""
	I0314 01:01:10.233909   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.233917   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:10.233923   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:10.233975   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:10.272033   66232 cri.go:89] found id: ""
	I0314 01:01:10.272061   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.272069   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:10.272075   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:10.272129   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:10.312680   66232 cri.go:89] found id: ""
	I0314 01:01:10.312706   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.312717   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:10.312727   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:10.312742   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:10.327507   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:10.327537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:10.410274   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:10.410299   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:10.410311   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:10.498686   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:10.498721   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:10.543509   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:10.543561   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:13.098621   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:10.335968   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:12.836293   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:11.692899   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.696150   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:11.538154   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.540093   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.114598   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:13.114685   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:13.169907   66232 cri.go:89] found id: ""
	I0314 01:01:13.169930   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.169937   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:13.169943   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:13.169999   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:13.237394   66232 cri.go:89] found id: ""
	I0314 01:01:13.237417   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.237429   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:13.237439   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:13.237502   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:13.295227   66232 cri.go:89] found id: ""
	I0314 01:01:13.295250   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.295258   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:13.295265   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:13.295326   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:13.333351   66232 cri.go:89] found id: ""
	I0314 01:01:13.333378   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.333388   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:13.333396   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:13.333457   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:13.376480   66232 cri.go:89] found id: ""
	I0314 01:01:13.376503   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.376511   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:13.376516   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:13.376578   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:13.416746   66232 cri.go:89] found id: ""
	I0314 01:01:13.416778   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.416786   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:13.416792   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:13.416842   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:13.455971   66232 cri.go:89] found id: ""
	I0314 01:01:13.456004   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.456014   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:13.456022   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:13.456090   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:13.493921   66232 cri.go:89] found id: ""
	I0314 01:01:13.493952   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.493964   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:13.493975   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:13.493994   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:13.582269   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:13.582317   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:13.627643   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:13.627675   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:13.680989   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:13.681021   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:13.696675   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:13.696708   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:13.768850   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:16.269385   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:16.284543   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:16.284607   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:16.322317   66232 cri.go:89] found id: ""
	I0314 01:01:16.322345   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.322356   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:16.322364   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:16.322412   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:16.362651   66232 cri.go:89] found id: ""
	I0314 01:01:16.362686   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.362697   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:16.362705   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:16.362782   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:16.403239   66232 cri.go:89] found id: ""
	I0314 01:01:16.403268   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.403276   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:16.403282   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:16.403339   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:16.442326   66232 cri.go:89] found id: ""
	I0314 01:01:16.442348   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.442355   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:16.442361   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:16.442423   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:16.480694   66232 cri.go:89] found id: ""
	I0314 01:01:16.480722   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.480733   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:16.480741   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:16.480809   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:16.521555   66232 cri.go:89] found id: ""
	I0314 01:01:16.521585   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.521596   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:16.521603   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:16.521663   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:16.564517   66232 cri.go:89] found id: ""
	I0314 01:01:16.564544   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.564555   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:16.564561   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:16.564641   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:16.602650   66232 cri.go:89] found id: ""
	I0314 01:01:16.602680   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.602690   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:16.602701   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:16.602715   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:16.645742   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:16.645777   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:16.704940   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:16.704972   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:16.720393   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:16.720420   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:16.799609   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:16.799640   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:16.799655   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:14.836773   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:17.336818   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:16.192938   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:18.193968   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:16.038263   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:18.538739   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:19.388482   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:19.402293   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:19.402372   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:19.439978   66232 cri.go:89] found id: ""
	I0314 01:01:19.440002   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.440025   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:19.440033   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:19.440112   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:19.475984   66232 cri.go:89] found id: ""
	I0314 01:01:19.476011   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.476019   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:19.476026   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:19.476078   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:19.512705   66232 cri.go:89] found id: ""
	I0314 01:01:19.512733   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.512742   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:19.512748   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:19.512793   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:19.552300   66232 cri.go:89] found id: ""
	I0314 01:01:19.552329   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.552339   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:19.552347   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:19.552413   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:19.598630   66232 cri.go:89] found id: ""
	I0314 01:01:19.598660   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.598670   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:19.598678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:19.598741   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:19.635883   66232 cri.go:89] found id: ""
	I0314 01:01:19.635912   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.635924   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:19.635931   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:19.635991   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:19.670339   66232 cri.go:89] found id: ""
	I0314 01:01:19.670364   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.670371   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:19.670377   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:19.670430   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:19.709469   66232 cri.go:89] found id: ""
	I0314 01:01:19.709512   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.709522   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:19.709533   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:19.709551   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:19.782157   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:19.782181   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:19.782192   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:19.866496   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:19.866531   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:19.910167   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:19.910198   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:19.963516   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:19.963546   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:22.478995   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:22.493273   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:22.493351   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:22.531559   66232 cri.go:89] found id: ""
	I0314 01:01:22.531581   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.531588   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:22.531594   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:22.531651   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:22.569478   66232 cri.go:89] found id: ""
	I0314 01:01:22.569508   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.569516   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:22.569524   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:22.569570   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:22.607573   66232 cri.go:89] found id: ""
	I0314 01:01:22.607599   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.607615   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:22.607625   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:22.607700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:22.644849   66232 cri.go:89] found id: ""
	I0314 01:01:22.644875   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.644885   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:22.644893   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:22.644950   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:22.683745   66232 cri.go:89] found id: ""
	I0314 01:01:22.683771   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.683779   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:22.683785   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:22.683845   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:22.723426   66232 cri.go:89] found id: ""
	I0314 01:01:22.723455   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.723462   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:22.723468   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:22.723512   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:22.761814   66232 cri.go:89] found id: ""
	I0314 01:01:22.761850   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.761860   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:22.761867   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:22.761918   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:22.799649   66232 cri.go:89] found id: ""
	I0314 01:01:22.799677   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.799687   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:22.799697   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:22.799707   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:22.840183   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:22.840215   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:22.893385   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:22.893416   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:22.909225   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:22.909250   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:22.982333   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:22.982353   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:22.982364   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:19.835211   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:21.835716   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:20.194985   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:22.692889   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:21.040809   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:23.538236   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:25.560639   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:25.575003   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:25.575082   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:25.613540   66232 cri.go:89] found id: ""
	I0314 01:01:25.613571   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.613583   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:25.613591   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:25.613653   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:25.652340   66232 cri.go:89] found id: ""
	I0314 01:01:25.652365   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.652373   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:25.652379   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:25.652425   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:25.691035   66232 cri.go:89] found id: ""
	I0314 01:01:25.691070   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.691079   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:25.691087   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:25.691152   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:25.729666   66232 cri.go:89] found id: ""
	I0314 01:01:25.729695   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.729705   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:25.729713   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:25.729783   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:25.766836   66232 cri.go:89] found id: ""
	I0314 01:01:25.766863   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.766871   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:25.766877   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:25.766934   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:25.813690   66232 cri.go:89] found id: ""
	I0314 01:01:25.813715   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.813727   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:25.813734   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:25.813796   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:25.858630   66232 cri.go:89] found id: ""
	I0314 01:01:25.858668   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.858679   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:25.858688   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:25.858774   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:25.896340   66232 cri.go:89] found id: ""
	I0314 01:01:25.896365   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.896372   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:25.896380   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:25.896392   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:25.949480   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:25.949513   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:25.965185   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:25.965211   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:26.041208   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:26.041228   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:26.041243   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:26.123892   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:26.123928   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:23.839306   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:26.335177   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.337014   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:24.695636   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:27.193395   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:29.200714   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:26.037924   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.038831   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.666449   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:28.679889   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:28.679948   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:28.717183   66232 cri.go:89] found id: ""
	I0314 01:01:28.717207   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.717214   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:28.717220   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:28.717275   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:28.761049   66232 cri.go:89] found id: ""
	I0314 01:01:28.761070   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.761077   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:28.761083   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:28.761133   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:28.800429   66232 cri.go:89] found id: ""
	I0314 01:01:28.800454   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.800462   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:28.800468   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:28.800523   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:28.841757   66232 cri.go:89] found id: ""
	I0314 01:01:28.841780   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.841788   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:28.841793   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:28.841838   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:28.883658   66232 cri.go:89] found id: ""
	I0314 01:01:28.883686   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.883696   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:28.883703   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:28.883759   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:28.918811   66232 cri.go:89] found id: ""
	I0314 01:01:28.918840   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.918851   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:28.918858   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:28.918916   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:28.955088   66232 cri.go:89] found id: ""
	I0314 01:01:28.955119   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.955130   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:28.955138   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:28.955195   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:28.992865   66232 cri.go:89] found id: ""
	I0314 01:01:28.992891   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.992903   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:28.992913   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:28.992931   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:29.080095   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:29.080132   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:29.127764   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:29.127789   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:29.182075   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:29.182109   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:29.198865   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:29.198891   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:29.277413   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:31.777693   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:31.792353   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:31.792426   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:31.830873   66232 cri.go:89] found id: ""
	I0314 01:01:31.830897   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.830904   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:31.830910   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:31.830955   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:31.868648   66232 cri.go:89] found id: ""
	I0314 01:01:31.868670   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.868677   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:31.868683   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:31.868733   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:31.910124   66232 cri.go:89] found id: ""
	I0314 01:01:31.910146   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.910155   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:31.910160   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:31.910209   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:31.957558   66232 cri.go:89] found id: ""
	I0314 01:01:31.957584   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.957592   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:31.957598   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:31.957652   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:32.000112   66232 cri.go:89] found id: ""
	I0314 01:01:32.000139   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.000157   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:32.000165   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:32.000229   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:32.037838   66232 cri.go:89] found id: ""
	I0314 01:01:32.037865   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.037876   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:32.037888   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:32.037949   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:32.076069   66232 cri.go:89] found id: ""
	I0314 01:01:32.076093   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.076101   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:32.076107   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:32.076172   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:32.114702   66232 cri.go:89] found id: ""
	I0314 01:01:32.114730   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.114737   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:32.114745   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:32.114757   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:32.162043   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:32.162078   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:32.219038   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:32.219075   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:32.234331   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:32.234358   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:32.307667   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:32.307688   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:32.307700   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:30.835936   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:33.335575   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:31.692739   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:33.693455   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:30.537265   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:32.538754   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:35.037382   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:34.893945   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:34.907888   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:34.907966   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:34.944887   66232 cri.go:89] found id: ""
	I0314 01:01:34.944911   66232 logs.go:276] 0 containers: []
	W0314 01:01:34.944919   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:34.944925   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:34.944973   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:34.992937   66232 cri.go:89] found id: ""
	I0314 01:01:34.992964   66232 logs.go:276] 0 containers: []
	W0314 01:01:34.992974   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:34.992982   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:34.993040   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:35.030147   66232 cri.go:89] found id: ""
	I0314 01:01:35.030171   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.030178   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:35.030184   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:35.030230   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:35.065966   66232 cri.go:89] found id: ""
	I0314 01:01:35.065999   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.066010   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:35.066018   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:35.066077   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:35.104221   66232 cri.go:89] found id: ""
	I0314 01:01:35.104251   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.104262   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:35.104270   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:35.104347   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:35.145221   66232 cri.go:89] found id: ""
	I0314 01:01:35.145245   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.145253   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:35.145258   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:35.145313   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:35.185119   66232 cri.go:89] found id: ""
	I0314 01:01:35.185152   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.185162   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:35.185168   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:35.185228   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:35.228309   66232 cri.go:89] found id: ""
	I0314 01:01:35.228341   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.228352   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:35.228363   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:35.228381   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:35.242185   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:35.242213   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:35.318542   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:35.318564   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:35.318578   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:35.396003   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:35.396042   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:35.437435   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:35.437464   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:37.992023   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:38.007180   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:38.007260   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:38.047871   66232 cri.go:89] found id: ""
	I0314 01:01:38.047906   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.047917   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:38.047925   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:38.047982   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:38.085359   66232 cri.go:89] found id: ""
	I0314 01:01:38.085388   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.085397   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:38.085404   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:38.085462   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:35.336258   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:37.835151   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:35.696328   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:38.192502   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:37.037490   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:39.038097   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:38.126190   66232 cri.go:89] found id: ""
	I0314 01:01:38.126219   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.126227   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:38.126233   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:38.126285   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:38.163163   66232 cri.go:89] found id: ""
	I0314 01:01:38.163190   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.163197   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:38.163202   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:38.163261   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:38.204338   66232 cri.go:89] found id: ""
	I0314 01:01:38.204360   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.204367   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:38.204372   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:38.204429   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:38.246252   66232 cri.go:89] found id: ""
	I0314 01:01:38.246278   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.246288   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:38.246296   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:38.246357   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:38.281173   66232 cri.go:89] found id: ""
	I0314 01:01:38.281198   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.281205   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:38.281211   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:38.281258   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:38.323744   66232 cri.go:89] found id: ""
	I0314 01:01:38.323774   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.323784   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:38.323794   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:38.323808   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:38.377987   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:38.378020   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:38.392879   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:38.392904   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:38.479475   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:38.479501   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:38.479515   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:38.563409   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:38.563440   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:41.105122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:41.119932   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:41.119997   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:41.158809   66232 cri.go:89] found id: ""
	I0314 01:01:41.158837   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.158847   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:41.158854   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:41.158915   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:41.201150   66232 cri.go:89] found id: ""
	I0314 01:01:41.201175   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.201183   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:41.201189   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:41.201239   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:41.240139   66232 cri.go:89] found id: ""
	I0314 01:01:41.240165   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.240173   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:41.240178   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:41.240232   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:41.278220   66232 cri.go:89] found id: ""
	I0314 01:01:41.278249   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.278257   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:41.278262   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:41.278310   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:41.313130   66232 cri.go:89] found id: ""
	I0314 01:01:41.313161   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.313170   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:41.313175   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:41.313235   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:41.351266   66232 cri.go:89] found id: ""
	I0314 01:01:41.351296   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.351305   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:41.351313   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:41.351378   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:41.389765   66232 cri.go:89] found id: ""
	I0314 01:01:41.389796   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.389807   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:41.389816   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:41.389893   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:41.437503   66232 cri.go:89] found id: ""
	I0314 01:01:41.437527   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.437537   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:41.437553   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:41.437568   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:41.451137   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:41.451170   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:41.554349   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:41.554376   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:41.554391   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:41.634670   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:41.634713   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:41.678576   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:41.678607   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:39.836520   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:41.837350   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:40.192708   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:42.193948   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:41.038661   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:43.538547   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:44.237699   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:44.252678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:44.252757   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:44.290393   66232 cri.go:89] found id: ""
	I0314 01:01:44.290420   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.290430   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:44.290438   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:44.290492   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:44.331394   66232 cri.go:89] found id: ""
	I0314 01:01:44.331426   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.331438   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:44.331446   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:44.331506   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:44.373654   66232 cri.go:89] found id: ""
	I0314 01:01:44.373686   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.373694   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:44.373702   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:44.373764   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:44.414168   66232 cri.go:89] found id: ""
	I0314 01:01:44.414198   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.414206   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:44.414212   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:44.414259   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:44.451158   66232 cri.go:89] found id: ""
	I0314 01:01:44.451183   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.451193   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:44.451201   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:44.451269   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:44.495410   66232 cri.go:89] found id: ""
	I0314 01:01:44.495436   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.495443   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:44.495450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:44.495509   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:44.539100   66232 cri.go:89] found id: ""
	I0314 01:01:44.539123   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.539129   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:44.539136   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:44.539189   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:44.581428   66232 cri.go:89] found id: ""
	I0314 01:01:44.581451   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.581463   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:44.581473   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:44.581491   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:44.657373   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:44.657393   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:44.657406   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:44.742163   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:44.742198   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:44.786447   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:44.786481   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:44.840479   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:44.840534   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:47.355369   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:47.369427   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:47.369491   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:47.408529   66232 cri.go:89] found id: ""
	I0314 01:01:47.408559   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.408567   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:47.408574   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:47.408619   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:47.445164   66232 cri.go:89] found id: ""
	I0314 01:01:47.445192   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.445201   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:47.445208   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:47.445255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:47.503333   66232 cri.go:89] found id: ""
	I0314 01:01:47.503367   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.503378   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:47.503385   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:47.503441   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:47.544289   66232 cri.go:89] found id: ""
	I0314 01:01:47.544313   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.544322   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:47.544329   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:47.544389   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:47.581686   66232 cri.go:89] found id: ""
	I0314 01:01:47.581707   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.581715   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:47.581726   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:47.581773   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:47.620907   66232 cri.go:89] found id: ""
	I0314 01:01:47.620937   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.620948   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:47.620954   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:47.620999   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:47.655975   66232 cri.go:89] found id: ""
	I0314 01:01:47.656006   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.656018   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:47.656026   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:47.656088   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:47.694787   66232 cri.go:89] found id: ""
	I0314 01:01:47.694813   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.694822   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:47.694832   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:47.694846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:47.732722   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:47.732752   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:47.784521   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:47.784551   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:47.798074   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:47.798096   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:47.872951   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:47.872971   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:47.872984   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:44.336278   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:46.336942   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:44.693975   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:47.194065   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:46.037997   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:48.038275   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.456896   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:50.472083   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:50.472159   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:50.510213   66232 cri.go:89] found id: ""
	I0314 01:01:50.510236   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.510244   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:50.510251   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:50.510308   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:50.551878   66232 cri.go:89] found id: ""
	I0314 01:01:50.551906   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.551915   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:50.551923   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:50.551983   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:50.599971   66232 cri.go:89] found id: ""
	I0314 01:01:50.599993   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.600000   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:50.600011   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:50.600068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:50.636105   66232 cri.go:89] found id: ""
	I0314 01:01:50.636135   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.636146   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:50.636154   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:50.636218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:50.674154   66232 cri.go:89] found id: ""
	I0314 01:01:50.674188   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.674199   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:50.674207   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:50.674273   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:50.711946   66232 cri.go:89] found id: ""
	I0314 01:01:50.711980   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.711992   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:50.711999   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:50.712048   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:50.750574   66232 cri.go:89] found id: ""
	I0314 01:01:50.750601   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.750612   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:50.750620   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:50.750679   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:50.788991   66232 cri.go:89] found id: ""
	I0314 01:01:50.789022   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.789033   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:50.789045   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:50.789060   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:50.842491   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:50.842524   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:50.857759   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:50.857785   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:50.929715   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:50.929739   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:50.929754   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:51.008843   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:51.008883   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:48.835669   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.835802   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.335897   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:49.692834   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:52.191722   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:54.192101   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.543509   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.037040   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.554369   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:53.569045   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:53.569125   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:53.607571   66232 cri.go:89] found id: ""
	I0314 01:01:53.607602   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.607613   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:53.607621   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:53.607700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:53.647998   66232 cri.go:89] found id: ""
	I0314 01:01:53.648027   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.648037   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:53.648044   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:53.648116   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:53.684825   66232 cri.go:89] found id: ""
	I0314 01:01:53.684855   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.684866   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:53.684873   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:53.684931   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:53.722438   66232 cri.go:89] found id: ""
	I0314 01:01:53.722465   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.722476   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:53.722484   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:53.722543   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:53.761945   66232 cri.go:89] found id: ""
	I0314 01:01:53.761987   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.761999   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:53.762014   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:53.762075   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:53.799307   66232 cri.go:89] found id: ""
	I0314 01:01:53.799338   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.799349   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:53.799362   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:53.799420   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:53.838685   66232 cri.go:89] found id: ""
	I0314 01:01:53.838713   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.838724   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:53.838731   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:53.838810   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:53.884324   66232 cri.go:89] found id: ""
	I0314 01:01:53.884351   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.884360   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:53.884370   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:53.884382   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:53.942495   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:53.942527   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:54.007790   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:54.007828   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:54.023348   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:54.023378   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:54.099122   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:54.099150   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:54.099165   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:56.679464   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:56.693691   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:56.693753   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:56.731721   66232 cri.go:89] found id: ""
	I0314 01:01:56.731749   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.731756   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:56.731761   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:56.731811   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:56.766579   66232 cri.go:89] found id: ""
	I0314 01:01:56.766607   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.766614   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:56.766620   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:56.766675   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:56.807537   66232 cri.go:89] found id: ""
	I0314 01:01:56.807565   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.807574   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:56.807579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:56.807631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:56.849077   66232 cri.go:89] found id: ""
	I0314 01:01:56.849100   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.849106   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:56.849112   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:56.849169   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:56.890982   66232 cri.go:89] found id: ""
	I0314 01:01:56.891003   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.891011   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:56.891016   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:56.891061   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:56.929769   66232 cri.go:89] found id: ""
	I0314 01:01:56.929790   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.929799   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:56.929805   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:56.929848   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:56.967319   66232 cri.go:89] found id: ""
	I0314 01:01:56.967346   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.967356   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:56.967363   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:56.967421   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:57.004649   66232 cri.go:89] found id: ""
	I0314 01:01:57.004670   66232 logs.go:276] 0 containers: []
	W0314 01:01:57.004677   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:57.004685   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:57.004696   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:57.018578   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:57.018604   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:57.090826   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:57.090852   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:57.090868   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:57.170367   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:57.170398   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:57.216138   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:57.216179   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:55.835724   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:57.836100   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:56.192712   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:58.193199   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:55.538829   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:58.037589   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:00.038724   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:59.769685   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:59.786652   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:59.786713   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:59.869453   66232 cri.go:89] found id: ""
	I0314 01:01:59.869480   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.869491   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:59.869499   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:59.869568   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:59.915747   66232 cri.go:89] found id: ""
	I0314 01:01:59.915769   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.915777   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:59.915782   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:59.915840   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:59.951088   66232 cri.go:89] found id: ""
	I0314 01:01:59.951117   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.951127   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:59.951133   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:59.951197   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:59.986847   66232 cri.go:89] found id: ""
	I0314 01:01:59.986877   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.986890   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:59.986898   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:59.986954   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:00.025390   66232 cri.go:89] found id: ""
	I0314 01:02:00.025420   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.025432   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:00.025440   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:00.025493   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:00.064174   66232 cri.go:89] found id: ""
	I0314 01:02:00.064206   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.064217   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:00.064226   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:00.064286   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:00.102079   66232 cri.go:89] found id: ""
	I0314 01:02:00.102102   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.102112   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:00.102119   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:00.102179   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:00.138672   66232 cri.go:89] found id: ""
	I0314 01:02:00.138700   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.138711   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:00.138721   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:00.138740   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:00.153516   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:00.153548   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:00.226585   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:00.226616   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:00.226631   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:00.307861   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:00.307898   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:00.353938   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:00.353966   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:02.909252   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:02.923483   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:02.923560   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:02.964379   66232 cri.go:89] found id: ""
	I0314 01:02:02.964408   66232 logs.go:276] 0 containers: []
	W0314 01:02:02.964419   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:02.964427   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:02.964486   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:03.001988   66232 cri.go:89] found id: ""
	I0314 01:02:03.002018   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.002028   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:03.002036   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:03.002106   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:03.043534   66232 cri.go:89] found id: ""
	I0314 01:02:03.043561   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.043572   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:03.043579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:03.043637   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:03.083413   66232 cri.go:89] found id: ""
	I0314 01:02:03.083436   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.083444   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:03.083450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:03.083504   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:59.837128   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.336485   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:00.692314   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.693186   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.039631   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:04.536890   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:03.117627   66232 cri.go:89] found id: ""
	I0314 01:02:03.117652   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.117664   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:03.117670   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:03.117718   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:03.151758   66232 cri.go:89] found id: ""
	I0314 01:02:03.151791   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.151802   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:03.151810   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:03.151861   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:03.192091   66232 cri.go:89] found id: ""
	I0314 01:02:03.192112   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.192118   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:03.192124   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:03.192178   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:03.235995   66232 cri.go:89] found id: ""
	I0314 01:02:03.236019   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.236029   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:03.236039   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:03.236053   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:03.289431   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:03.289475   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:03.305271   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:03.305325   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:03.383902   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:03.383922   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:03.383937   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:03.462882   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:03.462926   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:06.007991   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:06.023709   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:06.023768   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:06.063630   66232 cri.go:89] found id: ""
	I0314 01:02:06.063655   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.063662   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:06.063669   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:06.063727   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:06.103042   66232 cri.go:89] found id: ""
	I0314 01:02:06.103074   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.103083   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:06.103092   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:06.103149   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:06.139774   66232 cri.go:89] found id: ""
	I0314 01:02:06.139799   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.139810   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:06.139817   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:06.139874   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:06.176671   66232 cri.go:89] found id: ""
	I0314 01:02:06.176713   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.176724   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:06.176732   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:06.176798   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:06.216798   66232 cri.go:89] found id: ""
	I0314 01:02:06.216828   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.216840   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:06.216847   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:06.216903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:06.256606   66232 cri.go:89] found id: ""
	I0314 01:02:06.256635   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.256645   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:06.256653   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:06.256712   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:06.295087   66232 cri.go:89] found id: ""
	I0314 01:02:06.295119   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.295129   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:06.295137   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:06.295198   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:06.329411   66232 cri.go:89] found id: ""
	I0314 01:02:06.329441   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.329454   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:06.329464   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:06.329489   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:06.412363   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:06.412409   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:06.458902   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:06.458932   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:06.510147   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:06.510182   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:06.526670   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:06.526695   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:06.604970   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:04.835705   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:07.335832   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:04.693230   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:06.694579   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:08.697716   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:06.538380   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:08.538547   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:09.106124   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:09.119646   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:09.119709   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:09.155771   66232 cri.go:89] found id: ""
	I0314 01:02:09.155804   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.155815   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:09.155824   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:09.155883   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:09.191683   66232 cri.go:89] found id: ""
	I0314 01:02:09.191722   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.191734   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:09.191742   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:09.191808   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:09.227010   66232 cri.go:89] found id: ""
	I0314 01:02:09.227033   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.227041   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:09.227050   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:09.227118   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:09.262820   66232 cri.go:89] found id: ""
	I0314 01:02:09.262850   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.262861   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:09.262869   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:09.262925   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:09.296057   66232 cri.go:89] found id: ""
	I0314 01:02:09.296092   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.296102   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:09.296109   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:09.296171   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:09.329589   66232 cri.go:89] found id: ""
	I0314 01:02:09.329615   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.329626   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:09.329634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:09.329685   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:09.374675   66232 cri.go:89] found id: ""
	I0314 01:02:09.374702   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.374710   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:09.374718   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:09.374785   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:09.412467   66232 cri.go:89] found id: ""
	I0314 01:02:09.412497   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.412508   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:09.412518   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:09.412535   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:09.465354   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:09.465386   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:09.481823   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:09.481849   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:09.558431   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:09.558458   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:09.558475   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:09.641132   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:09.641171   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:12.190189   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:12.203783   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:12.203858   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:12.240189   66232 cri.go:89] found id: ""
	I0314 01:02:12.240219   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.240230   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:12.240238   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:12.240296   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:12.276307   66232 cri.go:89] found id: ""
	I0314 01:02:12.276336   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.276346   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:12.276354   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:12.276415   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:12.316916   66232 cri.go:89] found id: ""
	I0314 01:02:12.316949   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.316967   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:12.316975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:12.317036   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:12.356871   66232 cri.go:89] found id: ""
	I0314 01:02:12.356900   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.356910   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:12.356918   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:12.356981   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:12.391983   66232 cri.go:89] found id: ""
	I0314 01:02:12.392015   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.392026   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:12.392035   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:12.392105   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:12.428823   66232 cri.go:89] found id: ""
	I0314 01:02:12.428857   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.428868   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:12.428877   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:12.428938   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:12.466319   66232 cri.go:89] found id: ""
	I0314 01:02:12.466342   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.466349   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:12.466354   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:12.466413   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:12.502277   66232 cri.go:89] found id: ""
	I0314 01:02:12.502309   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.502321   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:12.502333   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:12.502352   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:12.582309   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:12.582340   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:12.621333   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:12.621357   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:12.678396   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:12.678432   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:12.694371   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:12.694397   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:12.767592   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:09.337016   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.339617   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.192226   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:13.195180   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.037728   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:13.037824   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.038206   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.268149   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:15.281634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:15.281707   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:15.316336   66232 cri.go:89] found id: ""
	I0314 01:02:15.316358   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.316366   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:15.316373   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:15.316437   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:15.356168   66232 cri.go:89] found id: ""
	I0314 01:02:15.356194   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.356201   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:15.356206   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:15.356257   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:15.394686   66232 cri.go:89] found id: ""
	I0314 01:02:15.394714   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.394726   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:15.394734   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:15.394813   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:15.433996   66232 cri.go:89] found id: ""
	I0314 01:02:15.434023   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.434034   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:15.434042   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:15.434103   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:15.479544   66232 cri.go:89] found id: ""
	I0314 01:02:15.479572   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.479583   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:15.479590   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:15.479659   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:15.514835   66232 cri.go:89] found id: ""
	I0314 01:02:15.514865   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.514875   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:15.514883   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:15.514942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:15.554980   66232 cri.go:89] found id: ""
	I0314 01:02:15.555011   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.555022   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:15.555030   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:15.555092   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:15.590130   66232 cri.go:89] found id: ""
	I0314 01:02:15.590167   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.590178   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:15.590188   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:15.590203   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:15.658375   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:15.658394   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:15.658407   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:15.737774   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:15.737806   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:15.780480   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:15.780512   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:15.832787   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:15.832830   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:13.834955   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.836544   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:17.836736   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.693510   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:18.193089   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:17.537729   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:19.540149   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:18.350032   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:18.364871   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:18.364931   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:18.406581   66232 cri.go:89] found id: ""
	I0314 01:02:18.406611   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.406620   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:18.406633   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:18.406696   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:18.446140   66232 cri.go:89] found id: ""
	I0314 01:02:18.446166   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.446176   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:18.446183   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:18.446242   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:18.492662   66232 cri.go:89] found id: ""
	I0314 01:02:18.492705   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.492713   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:18.492719   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:18.492777   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:18.535933   66232 cri.go:89] found id: ""
	I0314 01:02:18.535961   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.535972   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:18.535980   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:18.536056   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:18.574133   66232 cri.go:89] found id: ""
	I0314 01:02:18.574159   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.574167   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:18.574173   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:18.574227   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:18.612726   66232 cri.go:89] found id: ""
	I0314 01:02:18.612750   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.612757   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:18.612763   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:18.612815   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:18.653068   66232 cri.go:89] found id: ""
	I0314 01:02:18.653092   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.653099   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:18.653105   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:18.653148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:18.692840   66232 cri.go:89] found id: ""
	I0314 01:02:18.692880   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.692890   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:18.692902   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:18.692915   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:18.748680   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:18.748717   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:18.764026   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:18.764054   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:18.841767   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:18.841791   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:18.841805   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:18.923479   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:18.923512   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:21.467679   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:21.482326   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:21.482400   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:21.519603   66232 cri.go:89] found id: ""
	I0314 01:02:21.519627   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.519635   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:21.519641   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:21.519711   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:21.562301   66232 cri.go:89] found id: ""
	I0314 01:02:21.562325   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.562333   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:21.562338   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:21.562395   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:21.599503   66232 cri.go:89] found id: ""
	I0314 01:02:21.599531   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.599539   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:21.599545   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:21.599598   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:21.635347   66232 cri.go:89] found id: ""
	I0314 01:02:21.635378   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.635390   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:21.635397   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:21.635458   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:21.672622   66232 cri.go:89] found id: ""
	I0314 01:02:21.672648   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.672658   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:21.672667   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:21.672719   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:21.713177   66232 cri.go:89] found id: ""
	I0314 01:02:21.713201   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.713209   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:21.713217   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:21.713277   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:21.754273   66232 cri.go:89] found id: ""
	I0314 01:02:21.754312   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.754336   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:21.754350   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:21.754408   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:21.793782   66232 cri.go:89] found id: ""
	I0314 01:02:21.793832   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.793852   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:21.793864   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:21.793886   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:21.877495   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:21.877521   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:21.877536   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:21.963446   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:21.963485   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:22.005250   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:22.005286   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:22.081328   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:22.081368   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:20.336150   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:21.836598   65864 pod_ready.go:81] duration metric: took 4m0.008051794s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	E0314 01:02:21.836623   65864 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:02:21.836633   65864 pod_ready.go:38] duration metric: took 4m4.551998385s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:02:21.836650   65864 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:02:21.836684   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:21.836737   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:21.913367   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:21.913392   65864 cri.go:89] found id: ""
	I0314 01:02:21.913401   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:21.913461   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:21.920425   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:21.920491   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:21.968527   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:21.968560   65864 cri.go:89] found id: ""
	I0314 01:02:21.968578   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:21.968641   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:21.973938   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:21.974019   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:22.027214   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:22.027239   65864 cri.go:89] found id: ""
	I0314 01:02:22.027250   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:22.027301   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.033919   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:22.034007   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:22.085453   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:22.085477   65864 cri.go:89] found id: ""
	I0314 01:02:22.085486   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:22.085541   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.091651   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:22.091726   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:22.134083   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:22.134112   65864 cri.go:89] found id: ""
	I0314 01:02:22.134121   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:22.134179   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.139013   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:22.139089   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:22.176760   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:22.176785   65864 cri.go:89] found id: ""
	I0314 01:02:22.176795   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:22.176844   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.182497   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:22.182573   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:22.236966   65864 cri.go:89] found id: ""
	I0314 01:02:22.237000   65864 logs.go:276] 0 containers: []
	W0314 01:02:22.237010   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:22.237017   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:22.237078   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:22.289422   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:22.289448   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:22.289454   65864 cri.go:89] found id: ""
	I0314 01:02:22.289462   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:22.289526   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.295489   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.300166   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:22.300189   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:22.361740   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:22.361779   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:22.432402   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:22.432443   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:22.476348   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:22.476378   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:22.516881   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:22.516911   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:22.576864   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:22.576899   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:22.622739   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:22.622783   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:22.679757   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:22.679794   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:22.882084   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:22.882126   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:22.937962   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:22.937999   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:22.994180   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:22.994209   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:23.038730   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:23.038761   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:23.518422   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:23.518471   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:20.193555   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:22.194625   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:22.039562   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:24.043053   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:24.599757   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:24.615216   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:24.615273   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:24.654495   66232 cri.go:89] found id: ""
	I0314 01:02:24.654521   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.654529   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:24.654535   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:24.654581   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:24.691822   66232 cri.go:89] found id: ""
	I0314 01:02:24.691854   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.691864   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:24.691872   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:24.691927   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:24.734755   66232 cri.go:89] found id: ""
	I0314 01:02:24.734796   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.734806   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:24.734812   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:24.734864   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:24.770474   66232 cri.go:89] found id: ""
	I0314 01:02:24.770502   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.770513   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:24.770520   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:24.770564   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:24.807518   66232 cri.go:89] found id: ""
	I0314 01:02:24.807549   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.807562   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:24.807570   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:24.807636   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:24.844469   66232 cri.go:89] found id: ""
	I0314 01:02:24.844500   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.844513   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:24.844521   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:24.844585   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:24.882099   66232 cri.go:89] found id: ""
	I0314 01:02:24.882136   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.882147   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:24.882155   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:24.882215   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:24.922711   66232 cri.go:89] found id: ""
	I0314 01:02:24.922751   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.922773   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:24.922787   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:24.922802   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:24.965349   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:24.965374   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:25.021552   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:25.021585   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:25.039990   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:25.040027   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:25.116945   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:25.116967   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:25.116981   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:27.706427   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:27.722129   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:27.722193   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:27.762976   66232 cri.go:89] found id: ""
	I0314 01:02:27.763015   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.763023   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:27.763029   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:27.763077   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:27.803939   66232 cri.go:89] found id: ""
	I0314 01:02:27.803979   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.803990   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:27.803997   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:27.804068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:27.844923   66232 cri.go:89] found id: ""
	I0314 01:02:27.844946   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.844953   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:27.844959   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:27.845015   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:27.882694   66232 cri.go:89] found id: ""
	I0314 01:02:27.882717   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.882725   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:27.882731   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:27.882801   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:27.922926   66232 cri.go:89] found id: ""
	I0314 01:02:27.922958   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.922968   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:27.922975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:27.923035   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:27.960120   66232 cri.go:89] found id: ""
	I0314 01:02:27.960149   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.960160   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:27.960168   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:27.960228   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:28.015021   66232 cri.go:89] found id: ""
	I0314 01:02:28.015047   66232 logs.go:276] 0 containers: []
	W0314 01:02:28.015056   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:28.015062   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:28.015119   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:28.054923   66232 cri.go:89] found id: ""
	I0314 01:02:28.054946   66232 logs.go:276] 0 containers: []
	W0314 01:02:28.054952   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:28.054960   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:28.054972   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:26.038373   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:26.055483   65864 api_server.go:72] duration metric: took 4m14.013216316s to wait for apiserver process to appear ...
	I0314 01:02:26.055505   65864 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:02:26.055536   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:26.055585   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:26.108344   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:26.108363   65864 cri.go:89] found id: ""
	I0314 01:02:26.108370   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:26.108420   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.112806   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:26.112872   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:26.155399   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:26.155417   65864 cri.go:89] found id: ""
	I0314 01:02:26.155424   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:26.155468   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.159725   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:26.159780   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:26.201938   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:26.201960   65864 cri.go:89] found id: ""
	I0314 01:02:26.201968   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:26.202012   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.206751   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:26.206831   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:26.252327   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:26.252350   65864 cri.go:89] found id: ""
	I0314 01:02:26.252357   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:26.252405   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.257325   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:26.257387   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:26.297880   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:26.297901   65864 cri.go:89] found id: ""
	I0314 01:02:26.297910   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:26.297965   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.302607   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:26.302679   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:26.343104   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:26.343131   65864 cri.go:89] found id: ""
	I0314 01:02:26.343141   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:26.343207   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.347594   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:26.347652   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:26.390465   65864 cri.go:89] found id: ""
	I0314 01:02:26.390495   65864 logs.go:276] 0 containers: []
	W0314 01:02:26.390505   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:26.390517   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:26.390576   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:26.434540   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:26.434566   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:26.434572   65864 cri.go:89] found id: ""
	I0314 01:02:26.434582   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:26.434644   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.439794   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.445012   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:26.445036   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:26.488302   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:26.488331   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:26.526601   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:26.526630   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:26.578955   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:26.578989   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:26.633535   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:26.633573   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:26.764496   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:26.764533   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:26.822677   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:26.822713   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:26.866628   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:26.866653   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:26.909498   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:26.909524   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:26.965612   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:26.965646   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:27.004922   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:27.004974   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:27.422800   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:27.422844   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:27.441082   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:27.441113   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:24.693782   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:27.193414   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:26.537535   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:28.539922   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:28.111690   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:28.111723   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:28.126158   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:28.126189   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:28.200521   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:28.200542   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:28.200554   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:28.279637   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:28.279672   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:30.824286   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:30.840707   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.840787   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.888628   66232 cri.go:89] found id: ""
	I0314 01:02:30.888658   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.888669   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:30.888677   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.888758   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.934219   66232 cri.go:89] found id: ""
	I0314 01:02:30.934254   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.934264   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:30.934272   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.934332   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.979679   66232 cri.go:89] found id: ""
	I0314 01:02:30.979702   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.979713   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:30.979721   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.979792   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:31.024045   66232 cri.go:89] found id: ""
	I0314 01:02:31.024074   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.024085   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:31.024093   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:31.024150   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:31.070153   66232 cri.go:89] found id: ""
	I0314 01:02:31.070185   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.070197   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:31.070204   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:31.070267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:31.121943   66232 cri.go:89] found id: ""
	I0314 01:02:31.121972   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.121983   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:31.121992   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:31.122056   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:31.168934   66232 cri.go:89] found id: ""
	I0314 01:02:31.168951   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.168959   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:31.168965   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:31.169040   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:31.213885   66232 cri.go:89] found id: ""
	I0314 01:02:31.213917   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.213929   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:31.213939   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:31.213958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:31.304097   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:31.304127   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:31.304142   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:31.388525   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:31.388566   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:31.442920   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:31.442953   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:31.505932   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:31.505965   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:29.995508   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 01:02:30.001049   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0314 01:02:30.002172   65864 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 01:02:30.002194   65864 api_server.go:131] duration metric: took 3.946684299s to wait for apiserver health ...
	I0314 01:02:30.002201   65864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:02:30.002224   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.002268   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.043814   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:30.043836   65864 cri.go:89] found id: ""
	I0314 01:02:30.043850   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:30.043904   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.048215   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.048287   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.085507   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:30.085530   65864 cri.go:89] found id: ""
	I0314 01:02:30.085538   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:30.085587   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.089899   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.089958   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.129518   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:30.129538   65864 cri.go:89] found id: ""
	I0314 01:02:30.129545   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:30.129588   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.134037   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.134121   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:30.178092   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:30.178114   65864 cri.go:89] found id: ""
	I0314 01:02:30.178122   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:30.178174   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.184655   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:30.184712   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:30.223945   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:30.223969   65864 cri.go:89] found id: ""
	I0314 01:02:30.223987   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:30.224051   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.228354   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:30.228410   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:30.265712   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:30.265741   65864 cri.go:89] found id: ""
	I0314 01:02:30.265758   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:30.265814   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.270260   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:30.270312   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:30.320283   65864 cri.go:89] found id: ""
	I0314 01:02:30.320314   65864 logs.go:276] 0 containers: []
	W0314 01:02:30.320327   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:30.320334   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:30.320385   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:30.360838   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:30.360865   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:30.360869   65864 cri.go:89] found id: ""
	I0314 01:02:30.360876   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:30.360919   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.366350   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.370839   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:30.370862   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:30.422403   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:30.422432   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:30.461303   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:30.461333   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:30.500335   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:30.500364   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:30.925694   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:30.925740   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:30.977607   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:30.977643   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:31.040726   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:31.040758   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:31.097774   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:31.097811   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:31.161995   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:31.162038   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:31.229782   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:31.229823   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:31.268715   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:31.268742   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:31.288135   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:31.288164   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:31.459345   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:31.459375   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:34.020556   65864 system_pods.go:59] 8 kube-system pods found
	I0314 01:02:34.020589   65864 system_pods.go:61] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running
	I0314 01:02:34.020598   65864 system_pods.go:61] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running
	I0314 01:02:34.020607   65864 system_pods.go:61] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running
	I0314 01:02:34.020612   65864 system_pods.go:61] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running
	I0314 01:02:34.020616   65864 system_pods.go:61] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 01:02:34.020620   65864 system_pods.go:61] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running
	I0314 01:02:34.020628   65864 system_pods.go:61] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:34.020634   65864 system_pods.go:61] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 01:02:34.020644   65864 system_pods.go:74] duration metric: took 4.018436618s to wait for pod list to return data ...
	I0314 01:02:34.020653   65864 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:02:34.023473   65864 default_sa.go:45] found service account: "default"
	I0314 01:02:34.023496   65864 default_sa.go:55] duration metric: took 2.831779ms for default service account to be created ...
	I0314 01:02:34.023504   65864 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:02:34.030011   65864 system_pods.go:86] 8 kube-system pods found
	I0314 01:02:34.030060   65864 system_pods.go:89] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running
	I0314 01:02:34.030068   65864 system_pods.go:89] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running
	I0314 01:02:34.030077   65864 system_pods.go:89] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running
	I0314 01:02:34.030083   65864 system_pods.go:89] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running
	I0314 01:02:34.030092   65864 system_pods.go:89] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 01:02:34.030107   65864 system_pods.go:89] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running
	I0314 01:02:34.030124   65864 system_pods.go:89] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:34.030131   65864 system_pods.go:89] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 01:02:34.030143   65864 system_pods.go:126] duration metric: took 6.633594ms to wait for k8s-apps to be running ...
	I0314 01:02:34.030188   65864 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:02:34.030262   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:34.050932   65864 system_svc.go:56] duration metric: took 20.734837ms WaitForService to wait for kubelet
	I0314 01:02:34.050961   65864 kubeadm.go:576] duration metric: took 4m22.008698948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:02:34.050980   65864 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:02:34.055036   65864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:02:34.055068   65864 node_conditions.go:123] node cpu capacity is 2
	I0314 01:02:34.055083   65864 node_conditions.go:105] duration metric: took 4.097364ms to run NodePressure ...
	I0314 01:02:34.055105   65864 start.go:240] waiting for startup goroutines ...
	I0314 01:02:34.055118   65864 start.go:245] waiting for cluster config update ...
	I0314 01:02:34.055132   65864 start.go:254] writing updated cluster config ...
	I0314 01:02:34.055496   65864 ssh_runner.go:195] Run: rm -f paused
	I0314 01:02:34.113276   65864 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 01:02:34.115462   65864 out.go:177] * Done! kubectl is now configured to use "no-preload-585806" cluster and "default" namespace by default
	I0314 01:02:29.693041   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:32.194975   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:30.538234   66021 pod_ready.go:81] duration metric: took 4m0.007493671s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	E0314 01:02:30.538259   66021 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:02:30.538266   66021 pod_ready.go:38] duration metric: took 4m4.916255619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:02:30.538278   66021 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:02:30.538307   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.538363   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.592811   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:30.592839   66021 cri.go:89] found id: ""
	I0314 01:02:30.592850   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:30.592911   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.598839   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.598908   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.642277   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:30.642301   66021 cri.go:89] found id: ""
	I0314 01:02:30.642310   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:30.642362   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.646745   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.646815   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.696518   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:30.696538   66021 cri.go:89] found id: ""
	I0314 01:02:30.696548   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:30.696601   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.701433   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.701496   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:30.741777   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:30.741805   66021 cri.go:89] found id: ""
	I0314 01:02:30.741815   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:30.741873   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.746610   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:30.746678   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:30.802714   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:30.802734   66021 cri.go:89] found id: ""
	I0314 01:02:30.802743   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:30.802905   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.807733   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:30.807800   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:30.857325   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:30.857348   66021 cri.go:89] found id: ""
	I0314 01:02:30.857357   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:30.857411   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.864272   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:30.864342   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:30.913206   66021 cri.go:89] found id: ""
	I0314 01:02:30.913233   66021 logs.go:276] 0 containers: []
	W0314 01:02:30.913240   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:30.913246   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:30.913306   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:30.962101   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:30.962140   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:30.962146   66021 cri.go:89] found id: ""
	I0314 01:02:30.962164   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:30.962225   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.968138   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.974297   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:30.974321   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:31.169483   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:31.169515   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:31.231894   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:31.231933   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:31.292732   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:31.292784   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:31.340076   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:31.340116   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:31.405921   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:31.405964   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:31.456370   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:31.456398   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:31.504710   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:31.504736   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:31.989644   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:31.989675   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:32.048608   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:32.048641   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:32.063791   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:32.063820   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:32.104259   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:32.104285   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:32.143364   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:32.143388   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:34.704603   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:34.723060   66021 api_server.go:72] duration metric: took 4m16.82749669s to wait for apiserver process to appear ...
	I0314 01:02:34.723094   66021 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:02:34.723131   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:34.723195   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:34.763208   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:34.763235   66021 cri.go:89] found id: ""
	I0314 01:02:34.763245   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:34.763321   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.768746   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:34.768824   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:34.811836   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:34.811859   66021 cri.go:89] found id: ""
	I0314 01:02:34.811867   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:34.811921   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.816649   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:34.816714   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:34.857291   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:34.857312   66021 cri.go:89] found id: ""
	I0314 01:02:34.857319   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:34.857364   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.861988   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:34.862069   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:34.903495   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:34.903520   66021 cri.go:89] found id: ""
	I0314 01:02:34.903529   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:34.903589   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.908672   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:34.908728   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:34.954304   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:34.954327   66021 cri.go:89] found id: ""
	I0314 01:02:34.954335   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:34.954381   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.959231   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:34.959288   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:35.004076   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:35.004102   66021 cri.go:89] found id: ""
	I0314 01:02:35.004111   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:35.004164   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.009125   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:35.009193   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:35.049932   66021 cri.go:89] found id: ""
	I0314 01:02:35.049961   66021 logs.go:276] 0 containers: []
	W0314 01:02:35.049971   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:35.049979   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:35.050047   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:35.107527   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:35.107575   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:35.107582   66021 cri.go:89] found id: ""
	I0314 01:02:35.107591   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:35.107649   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.112355   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.116898   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:35.116925   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:34.021725   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:34.039342   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:34.039420   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:34.086740   66232 cri.go:89] found id: ""
	I0314 01:02:34.086775   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.086787   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:34.086803   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:34.086869   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:34.131404   66232 cri.go:89] found id: ""
	I0314 01:02:34.131432   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.131440   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:34.131445   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:34.131497   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:34.179153   66232 cri.go:89] found id: ""
	I0314 01:02:34.179182   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.179192   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:34.179199   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:34.179255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:34.228867   66232 cri.go:89] found id: ""
	I0314 01:02:34.228892   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.228902   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:34.228908   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:34.228942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:34.272680   66232 cri.go:89] found id: ""
	I0314 01:02:34.272705   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.272715   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:34.272722   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:34.272772   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:34.311626   66232 cri.go:89] found id: ""
	I0314 01:02:34.311672   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.311684   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:34.311692   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:34.311751   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:34.349977   66232 cri.go:89] found id: ""
	I0314 01:02:34.349998   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.350006   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:34.350012   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:34.350070   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:34.398456   66232 cri.go:89] found id: ""
	I0314 01:02:34.398481   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.398491   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:34.398503   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:34.398515   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:34.472170   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:34.472208   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:34.498046   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:34.498076   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:34.574474   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:34.574496   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:34.574529   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:34.656398   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:34.656435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:37.201236   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:37.216950   66232 kubeadm.go:591] duration metric: took 4m2.27726413s to restartPrimaryControlPlane
	W0314 01:02:37.217024   66232 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 01:02:37.217054   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 01:02:34.693825   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:37.191981   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:39.193819   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:35.155896   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:35.155929   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:35.198893   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:35.198923   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:35.258044   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:35.258076   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:35.296826   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:35.296859   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:35.349583   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:35.349619   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:35.400768   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:35.400805   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:35.528320   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:35.528357   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:35.571141   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:35.571174   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:35.612630   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:35.612658   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:36.034287   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:36.034321   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:36.093027   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:36.093054   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:36.150546   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:36.150589   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:38.673291   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 01:02:38.678087   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0314 01:02:38.679655   66021 api_server.go:141] control plane version: v1.28.4
	I0314 01:02:38.679674   66021 api_server.go:131] duration metric: took 3.956573598s to wait for apiserver health ...
	I0314 01:02:38.679680   66021 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:02:38.679700   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:38.679741   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:38.727884   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:38.727908   66021 cri.go:89] found id: ""
	I0314 01:02:38.727918   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:38.727974   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.732935   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:38.733003   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:38.771359   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:38.771387   66021 cri.go:89] found id: ""
	I0314 01:02:38.771397   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:38.771452   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.775888   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:38.775948   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:38.814905   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:38.814934   66021 cri.go:89] found id: ""
	I0314 01:02:38.814944   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:38.815018   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.820018   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:38.820096   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:38.869174   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:38.869200   66021 cri.go:89] found id: ""
	I0314 01:02:38.869210   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:38.869268   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.879998   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:38.880071   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:38.960143   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:38.960187   66021 cri.go:89] found id: ""
	I0314 01:02:38.960198   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:38.960258   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.964872   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:38.964940   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:39.005104   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:39.005126   66021 cri.go:89] found id: ""
	I0314 01:02:39.005134   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:39.005178   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.009751   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:39.009803   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:39.048232   66021 cri.go:89] found id: ""
	I0314 01:02:39.048263   66021 logs.go:276] 0 containers: []
	W0314 01:02:39.048274   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:39.048281   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:39.048335   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:39.087548   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:39.087568   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:39.087572   66021 cri.go:89] found id: ""
	I0314 01:02:39.087579   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:39.087624   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.092379   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.097599   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:39.097621   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:39.236455   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:39.236484   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:39.284275   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:39.284300   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:39.341908   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:39.341939   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:39.384407   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:39.384435   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:39.445137   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:39.445167   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:39.501656   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:39.501686   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:39.567627   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:39.567661   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:39.584561   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:39.584601   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:39.626131   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:39.626196   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:40.002525   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:40.002572   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:40.058721   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:40.058753   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:40.097905   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:40.097941   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:39.562661   66232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.345580159s)
	I0314 01:02:39.562733   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:39.579845   66232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 01:02:39.592242   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:02:39.603936   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:02:39.603962   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 01:02:39.604023   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:02:39.614854   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:02:39.614909   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:02:39.626602   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:02:39.637282   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:02:39.637334   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:02:39.650019   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:02:39.662020   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:02:39.662084   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:02:39.674740   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:02:39.685131   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:02:39.685190   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 01:02:39.696251   66232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:02:39.768972   66232 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 01:02:39.769055   66232 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:02:39.926950   66232 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:02:39.927086   66232 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:02:39.927239   66232 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 01:02:40.161671   66232 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:02:40.164039   66232 out.go:204]   - Generating certificates and keys ...
	I0314 01:02:40.164124   66232 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:02:40.164219   66232 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:02:40.164321   66232 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 01:02:40.164411   66232 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 01:02:40.164508   66232 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 01:02:40.164595   66232 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 01:02:40.164680   66232 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 01:02:40.164762   66232 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 01:02:40.164868   66232 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 01:02:40.164982   66232 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 01:02:40.165050   66232 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 01:02:40.165123   66232 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:02:40.264416   66232 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:02:40.417229   66232 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:02:40.489457   66232 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:02:40.743517   66232 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:02:40.759319   66232 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:02:40.760643   66232 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:02:40.760715   66232 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:02:40.939953   66232 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 01:02:42.643820   66021 system_pods.go:59] 8 kube-system pods found
	I0314 01:02:42.643846   66021 system_pods.go:61] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running
	I0314 01:02:42.643851   66021 system_pods.go:61] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running
	I0314 01:02:42.643854   66021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running
	I0314 01:02:42.643858   66021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running
	I0314 01:02:42.643861   66021 system_pods.go:61] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running
	I0314 01:02:42.643863   66021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running
	I0314 01:02:42.643869   66021 system_pods.go:61] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:42.643874   66021 system_pods.go:61] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 01:02:42.643881   66021 system_pods.go:74] duration metric: took 3.964195909s to wait for pod list to return data ...
	I0314 01:02:42.643888   66021 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:02:42.646461   66021 default_sa.go:45] found service account: "default"
	I0314 01:02:42.646481   66021 default_sa.go:55] duration metric: took 2.585464ms for default service account to be created ...
	I0314 01:02:42.646490   66021 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:02:42.651961   66021 system_pods.go:86] 8 kube-system pods found
	I0314 01:02:42.651983   66021 system_pods.go:89] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running
	I0314 01:02:42.651989   66021 system_pods.go:89] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running
	I0314 01:02:42.651993   66021 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running
	I0314 01:02:42.651998   66021 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running
	I0314 01:02:42.652002   66021 system_pods.go:89] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running
	I0314 01:02:42.652006   66021 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running
	I0314 01:02:42.652012   66021 system_pods.go:89] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:42.652019   66021 system_pods.go:89] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 01:02:42.652027   66021 system_pods.go:126] duration metric: took 5.530611ms to wait for k8s-apps to be running ...
	I0314 01:02:42.652037   66021 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:02:42.652078   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:42.669896   66021 system_svc.go:56] duration metric: took 17.851623ms WaitForService to wait for kubelet
	I0314 01:02:42.669930   66021 kubeadm.go:576] duration metric: took 4m24.774372903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:02:42.669965   66021 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:02:42.672766   66021 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:02:42.672789   66021 node_conditions.go:123] node cpu capacity is 2
	I0314 01:02:42.672802   66021 node_conditions.go:105] duration metric: took 2.830665ms to run NodePressure ...
	I0314 01:02:42.672813   66021 start.go:240] waiting for startup goroutines ...
	I0314 01:02:42.672819   66021 start.go:245] waiting for cluster config update ...
	I0314 01:02:42.672829   66021 start.go:254] writing updated cluster config ...
	I0314 01:02:42.673076   66021 ssh_runner.go:195] Run: rm -f paused
	I0314 01:02:42.721481   66021 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 01:02:42.723479   66021 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-652215" cluster and "default" namespace by default
	I0314 01:02:40.942001   66232 out.go:204]   - Booting up control plane ...
	I0314 01:02:40.942144   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:02:40.951012   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:02:40.952452   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:02:40.953336   66232 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:02:40.960365   66232 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:02:41.692569   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:43.693995   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:46.193241   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:48.194371   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:50.692479   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:52.692654   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:55.192035   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:57.692909   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:00.193154   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:02.194296   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:04.196022   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:06.693006   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:09.192302   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:11.192955   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:13.692552   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:15.192489   65557 pod_ready.go:81] duration metric: took 4m0.007020608s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	E0314 01:03:15.192527   65557 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:03:15.192538   65557 pod_ready.go:38] duration metric: took 4m4.053934642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:03:15.192554   65557 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:03:15.192587   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:15.192647   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:15.256619   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:15.256643   65557 cri.go:89] found id: ""
	I0314 01:03:15.256653   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:15.256707   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.262251   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:15.262317   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:15.305577   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:15.305605   65557 cri.go:89] found id: ""
	I0314 01:03:15.305613   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:15.305676   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.311058   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:15.311136   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:15.350580   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:15.350605   65557 cri.go:89] found id: ""
	I0314 01:03:15.350615   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:15.350675   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.355574   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:15.355637   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:15.395248   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:15.395278   65557 cri.go:89] found id: ""
	I0314 01:03:15.395289   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:15.395345   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.400714   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:15.400789   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:15.446181   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:15.446207   65557 cri.go:89] found id: ""
	I0314 01:03:15.446217   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:15.446280   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.451142   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:15.451220   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:15.499079   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:15.499106   65557 cri.go:89] found id: ""
	I0314 01:03:15.499120   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:15.499178   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.504092   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:15.504158   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:15.546791   65557 cri.go:89] found id: ""
	I0314 01:03:15.546820   65557 logs.go:276] 0 containers: []
	W0314 01:03:15.546830   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:15.546838   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:15.546898   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:15.586249   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:15.586271   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:15.586275   65557 cri.go:89] found id: ""
	I0314 01:03:15.586282   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:15.586341   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.590680   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.595060   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:15.595086   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:16.112562   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:16.112623   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:16.172847   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:16.172882   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:16.333057   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:16.333098   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:16.386456   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:16.386490   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:16.444375   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:16.444402   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:16.486220   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:16.486260   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:16.526438   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:16.526470   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:16.576927   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:16.576958   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:16.592148   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:16.592174   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:16.648514   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:16.648545   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:16.695025   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:16.695051   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:16.746925   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:16.746955   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.285952   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:03:19.304257   65557 api_server.go:72] duration metric: took 4m15.904145845s to wait for apiserver process to appear ...
	I0314 01:03:19.304286   65557 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:03:19.304325   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:19.304387   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:20.960311   66232 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 01:03:20.961416   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:20.961634   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:19.352722   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:19.352749   65557 cri.go:89] found id: ""
	I0314 01:03:19.352758   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:19.352813   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.358745   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:19.358840   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:19.398652   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:19.398677   65557 cri.go:89] found id: ""
	I0314 01:03:19.398687   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:19.398745   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.403737   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:19.403812   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:19.449705   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:19.449789   65557 cri.go:89] found id: ""
	I0314 01:03:19.449804   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:19.449875   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.454646   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:19.454703   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:19.497413   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:19.497437   65557 cri.go:89] found id: ""
	I0314 01:03:19.497446   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:19.497505   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.502314   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:19.502383   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:19.544651   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:19.544670   65557 cri.go:89] found id: ""
	I0314 01:03:19.544677   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:19.544734   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.549565   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:19.549627   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:19.588946   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:19.588964   65557 cri.go:89] found id: ""
	I0314 01:03:19.588971   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:19.589021   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.593896   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:19.593962   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:19.635716   65557 cri.go:89] found id: ""
	I0314 01:03:19.635742   65557 logs.go:276] 0 containers: []
	W0314 01:03:19.635753   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:19.635759   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:19.635815   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:19.677464   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.677489   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:19.677495   65557 cri.go:89] found id: ""
	I0314 01:03:19.677505   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:19.677565   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.682353   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.687167   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:19.687188   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:19.736953   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:19.736991   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:19.781476   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:19.781506   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:19.822236   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:19.822265   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:19.866289   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:19.866312   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:19.911787   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:19.911815   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.950065   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:19.950101   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:19.989521   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:19.989554   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:20.384831   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:20.384868   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:20.441338   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:20.441369   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:20.457686   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:20.457713   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:20.576908   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:20.576939   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:20.620339   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:20.620368   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:23.171840   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 01:03:23.178026   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0314 01:03:23.179553   65557 api_server.go:141] control plane version: v1.28.4
	I0314 01:03:23.179581   65557 api_server.go:131] duration metric: took 3.875286718s to wait for apiserver health ...
	I0314 01:03:23.179592   65557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:03:23.179620   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:23.179680   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:23.228503   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:23.228523   65557 cri.go:89] found id: ""
	I0314 01:03:23.228530   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:23.228582   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.233166   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:23.233236   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:23.274079   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:23.274110   65557 cri.go:89] found id: ""
	I0314 01:03:23.274120   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:23.274179   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.279453   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:23.279559   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:23.319821   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:23.319844   65557 cri.go:89] found id: ""
	I0314 01:03:23.319854   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:23.319914   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.325134   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:23.325199   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:23.366475   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:23.366496   65557 cri.go:89] found id: ""
	I0314 01:03:23.366503   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:23.366547   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.371660   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:23.371716   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:23.416034   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:23.416060   65557 cri.go:89] found id: ""
	I0314 01:03:23.416069   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:23.416128   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.421256   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:23.421319   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:23.461772   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:23.461792   65557 cri.go:89] found id: ""
	I0314 01:03:23.461799   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:23.461848   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.466581   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:23.466644   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:23.513583   65557 cri.go:89] found id: ""
	I0314 01:03:23.513610   65557 logs.go:276] 0 containers: []
	W0314 01:03:23.513626   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:23.513633   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:23.513693   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:23.554856   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:23.554875   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:23.554879   65557 cri.go:89] found id: ""
	I0314 01:03:23.554885   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:23.554932   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.559820   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.564514   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:23.564534   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:23.619210   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:23.619246   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:23.750881   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:23.750908   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:23.800300   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:23.800342   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:23.849606   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:23.849637   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:23.896168   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:23.896194   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:23.938976   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:23.939008   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:23.955960   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:23.955988   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:23.999961   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:23.999990   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:24.044533   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:24.044562   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:24.097691   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:24.097720   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:24.137172   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:24.137207   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:24.480724   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:24.480767   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:27.042143   65557 system_pods.go:59] 8 kube-system pods found
	I0314 01:03:27.042177   65557 system_pods.go:61] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running
	I0314 01:03:27.042185   65557 system_pods.go:61] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running
	I0314 01:03:27.042191   65557 system_pods.go:61] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running
	I0314 01:03:27.042197   65557 system_pods.go:61] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running
	I0314 01:03:27.042201   65557 system_pods.go:61] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running
	I0314 01:03:27.042206   65557 system_pods.go:61] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running
	I0314 01:03:27.042213   65557 system_pods.go:61] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:03:27.042220   65557 system_pods.go:61] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running
	I0314 01:03:27.042231   65557 system_pods.go:74] duration metric: took 3.862631414s to wait for pod list to return data ...
	I0314 01:03:27.042241   65557 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:03:27.045464   65557 default_sa.go:45] found service account: "default"
	I0314 01:03:27.045542   65557 default_sa.go:55] duration metric: took 3.286713ms for default service account to be created ...
	I0314 01:03:27.045573   65557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:03:27.057164   65557 system_pods.go:86] 8 kube-system pods found
	I0314 01:03:27.057193   65557 system_pods.go:89] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running
	I0314 01:03:27.057199   65557 system_pods.go:89] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running
	I0314 01:03:27.057204   65557 system_pods.go:89] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running
	I0314 01:03:27.057209   65557 system_pods.go:89] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running
	I0314 01:03:27.057213   65557 system_pods.go:89] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running
	I0314 01:03:27.057217   65557 system_pods.go:89] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running
	I0314 01:03:27.057224   65557 system_pods.go:89] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:03:27.057236   65557 system_pods.go:89] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running
	I0314 01:03:27.057243   65557 system_pods.go:126] duration metric: took 11.663667ms to wait for k8s-apps to be running ...
	I0314 01:03:27.057249   65557 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:03:27.057295   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:03:27.075469   65557 system_svc.go:56] duration metric: took 18.20927ms WaitForService to wait for kubelet
	I0314 01:03:27.075501   65557 kubeadm.go:576] duration metric: took 4m23.675393774s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:03:27.075521   65557 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:03:27.079149   65557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:03:27.079177   65557 node_conditions.go:123] node cpu capacity is 2
	I0314 01:03:27.079191   65557 node_conditions.go:105] duration metric: took 3.664222ms to run NodePressure ...
	I0314 01:03:27.079204   65557 start.go:240] waiting for startup goroutines ...
	I0314 01:03:27.079214   65557 start.go:245] waiting for cluster config update ...
	I0314 01:03:27.079228   65557 start.go:254] writing updated cluster config ...
	I0314 01:03:27.079567   65557 ssh_runner.go:195] Run: rm -f paused
	I0314 01:03:27.128453   65557 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 01:03:27.131043   65557 out.go:177] * Done! kubectl is now configured to use "embed-certs-164135" cluster and "default" namespace by default
	I0314 01:03:25.961895   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:25.962127   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:35.962149   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:35.962352   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:55.963116   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:55.963372   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:04:35.964528   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:04:35.964814   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:04:35.964841   66232 kubeadm.go:309] 
	I0314 01:04:35.964900   66232 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 01:04:35.964961   66232 kubeadm.go:309] 		timed out waiting for the condition
	I0314 01:04:35.964972   66232 kubeadm.go:309] 
	I0314 01:04:35.965026   66232 kubeadm.go:309] 	This error is likely caused by:
	I0314 01:04:35.965074   66232 kubeadm.go:309] 		- The kubelet is not running
	I0314 01:04:35.965219   66232 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 01:04:35.965231   66232 kubeadm.go:309] 
	I0314 01:04:35.965372   66232 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 01:04:35.965421   66232 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 01:04:35.965476   66232 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 01:04:35.965489   66232 kubeadm.go:309] 
	I0314 01:04:35.965638   66232 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 01:04:35.965743   66232 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 01:04:35.965753   66232 kubeadm.go:309] 
	I0314 01:04:35.965872   66232 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 01:04:35.965991   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 01:04:35.966110   66232 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 01:04:35.966220   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 01:04:35.966237   66232 kubeadm.go:309] 
	I0314 01:04:35.966903   66232 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:04:35.967031   66232 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 01:04:35.967165   66232 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0314 01:04:35.967278   66232 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0314 01:04:35.967374   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 01:04:36.533381   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:04:36.550315   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:04:36.562559   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:04:36.562582   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 01:04:36.562646   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:04:36.573080   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:04:36.573148   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:04:36.583367   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:04:36.592837   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:04:36.592905   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:04:36.602671   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:04:36.611880   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:04:36.611923   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:04:36.621373   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:04:36.630200   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:04:36.630250   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 01:04:36.639622   66232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:04:36.876475   66232 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:06:32.905531   66232 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 01:06:32.905658   66232 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 01:06:32.907378   66232 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 01:06:32.907462   66232 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:06:32.907597   66232 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:06:32.907758   66232 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:06:32.907878   66232 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 01:06:32.907969   66232 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:06:32.909826   66232 out.go:204]   - Generating certificates and keys ...
	I0314 01:06:32.909915   66232 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:06:32.909976   66232 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:06:32.910065   66232 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 01:06:32.910143   66232 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 01:06:32.910232   66232 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 01:06:32.910306   66232 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 01:06:32.910371   66232 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 01:06:32.910450   66232 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 01:06:32.910516   66232 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 01:06:32.910579   66232 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 01:06:32.910616   66232 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 01:06:32.910705   66232 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:06:32.910809   66232 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:06:32.910860   66232 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:06:32.910946   66232 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:06:32.911032   66232 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:06:32.911131   66232 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:06:32.911225   66232 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:06:32.911290   66232 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:06:32.911360   66232 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 01:06:32.912972   66232 out.go:204]   - Booting up control plane ...
	I0314 01:06:32.913087   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:06:32.913169   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:06:32.913260   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:06:32.913336   66232 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:06:32.913475   66232 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:06:32.913555   66232 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 01:06:32.913645   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.913879   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.913979   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914216   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914294   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914461   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914521   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914704   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914827   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.915063   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.915076   66232 kubeadm.go:309] 
	I0314 01:06:32.915112   66232 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 01:06:32.915167   66232 kubeadm.go:309] 		timed out waiting for the condition
	I0314 01:06:32.915177   66232 kubeadm.go:309] 
	I0314 01:06:32.915230   66232 kubeadm.go:309] 	This error is likely caused by:
	I0314 01:06:32.915269   66232 kubeadm.go:309] 		- The kubelet is not running
	I0314 01:06:32.915353   66232 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 01:06:32.915360   66232 kubeadm.go:309] 
	I0314 01:06:32.915441   66232 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 01:06:32.915469   66232 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 01:06:32.915498   66232 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 01:06:32.915505   66232 kubeadm.go:309] 
	I0314 01:06:32.915613   66232 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 01:06:32.915700   66232 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 01:06:32.915712   66232 kubeadm.go:309] 
	I0314 01:06:32.915855   66232 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 01:06:32.915955   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 01:06:32.916023   66232 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 01:06:32.916088   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 01:06:32.916154   66232 kubeadm.go:393] duration metric: took 7m58.036160375s to StartCluster
	I0314 01:06:32.916166   66232 kubeadm.go:309] 
	I0314 01:06:32.916226   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:06:32.916295   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:06:32.972336   66232 cri.go:89] found id: ""
	I0314 01:06:32.972364   66232 logs.go:276] 0 containers: []
	W0314 01:06:32.972371   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:06:32.972380   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:06:32.972434   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:06:33.023008   66232 cri.go:89] found id: ""
	I0314 01:06:33.023039   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.023050   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:06:33.023057   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:06:33.023130   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:06:33.061974   66232 cri.go:89] found id: ""
	I0314 01:06:33.062002   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.062011   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:06:33.062017   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:06:33.062085   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:06:33.101221   66232 cri.go:89] found id: ""
	I0314 01:06:33.101252   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.101264   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:06:33.101271   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:06:33.101330   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:06:33.139665   66232 cri.go:89] found id: ""
	I0314 01:06:33.139689   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.139697   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:06:33.139707   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:06:33.139753   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:06:33.186493   66232 cri.go:89] found id: ""
	I0314 01:06:33.186519   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.186530   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:06:33.186538   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:06:33.186610   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:06:33.236042   66232 cri.go:89] found id: ""
	I0314 01:06:33.236071   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.236083   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:06:33.236091   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:06:33.236148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:06:33.279285   66232 cri.go:89] found id: ""
	I0314 01:06:33.279316   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.279326   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:06:33.279338   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:06:33.279361   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:06:33.331702   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:06:33.331734   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:06:33.347222   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:06:33.347249   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:06:33.437201   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:06:33.437225   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:06:33.437240   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:06:33.550099   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:06:33.550135   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0314 01:06:33.596794   66232 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 01:06:33.596833   66232 out.go:239] * 
	W0314 01:06:33.596906   66232 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 01:06:33.596927   66232 out.go:239] * 
	W0314 01:06:33.597713   66232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 01:06:33.601567   66232 out.go:177] 
	W0314 01:06:33.602661   66232 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 01:06:33.602704   66232 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 01:06:33.602722   66232 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 01:06:33.604223   66232 out.go:177] 
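The suggestion printed above amounts to restarting the failed profile with the kubelet cgroup driver pinned to systemd. A minimal sketch of that command, with the profile name as a placeholder rather than a value taken from this run:

    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

Whether this clears the K8S_KUBELET_NOT_RUNNING failure depends on the node's CRI-O configuration; the kubelet and the container runtime generally need to agree on the cgroup driver.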
	
	
	==> CRI-O <==
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.315757818Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9a1d2e9-ceba-4c92-9280-d14bb23a1128 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.317347162Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8b8a4fb-5d7b-4285-b003-58f8e4839f3f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.317989018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378749317953487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8b8a4fb-5d7b-4285-b003-58f8e4839f3f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.319304177Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4f33d55-43c3-463b-9b71-60ec6f43e544 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.319376245Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4f33d55-43c3-463b-9b71-60ec6f43e544 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.319684951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377972347065014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c043e68cf38f6698396df019dd1accee542f53b7b3a728c7cd4c0fbe78740ac,PodSandboxId:2f8ccb5fcf859e04a4fab59c7e34e972573de99559938773051a0e227bb2ab29,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377951855199729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b24e199-4e82-4c69-bb1f-11fb49d244fe,},Annotations:map[string]string{io.kubernetes.container.hash: 418dae4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127,PodSandboxId:f9b93e1152c040ff8ecf86534669abd8a8688d4c49d359cd25b5498d27cd3a1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377949208103194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r2dml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18370dd-193e-45c2-ab72-36f8155ac015,},Annotations:map[string]string{io.kubernetes.container.hash: d4479a7a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd,PodSandboxId:2a70688c9af908e275f30529130ed766736e70b5733e7067cf8bfcb04c67b254,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377941611129765,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjz6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b76a6d-0a4a-4e06-8
e0a-7ac69d91a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: ee399f47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377941618840084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb05
00a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684,PodSandboxId:074cbe23e5592f7dbbd572e3b6eec9f75ac3801ecad53223b4c4bc8ade8c0fcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377936886170815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7243ee770cce457c6955feda92fc46a2,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a,PodSandboxId:3f49785534e5e082813189762c8ad6da5222cfa3e4a6000a960a7f198a5136ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377936887562191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721baa760f2eade26efc571ba635dfcb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d67d2646,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642,PodSandboxId:190cd2c13792cb5079fd39c2b695a8d53a11925d08e0f4eeb751179abe64caa8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377936808196049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8581d50187b10e539e7104520acb6dee,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d,PodSandboxId:a3f1204219842148966555397eb898aa648b5a21e2e336221e2549fcac78cc79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377936795983442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eff47c507cfd66cf030c245f9d1227f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: e9fc4344,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4f33d55-43c3-463b-9b71-60ec6f43e544 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.346829684Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2d79eb83-461d-4bb5-a520-ca29d8e0335d name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.347073125Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f9b93e1152c040ff8ecf86534669abd8a8688d4c49d359cd25b5498d27cd3a1e,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-r2dml,Uid:d18370dd-193e-45c2-ab72-36f8155ac015,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377948997095247,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-r2dml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18370dd-193e-45c2-ab72-36f8155ac015,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T00:59:01.080751158Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f8ccb5fcf859e04a4fab59c7e34e972573de99559938773051a0e227bb2ab29,Metadata:&PodSandboxMetadata{Name:busybox,Uid:7b24e199-4e82-4c69-bb1f-11fb49d244fe,Namespace:default,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1710377948971974196,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b24e199-4e82-4c69-bb1f-11fb49d244fe,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T00:59:01.080741764Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5875d7a24bd27cd698c36903ed9c2d9b43347ba89ef281aea4bee3d8ad973134,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-bbz2d,Uid:e6df7295-58bb-4ece-841f-f93afd3f9dc9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377947177189970,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-bbz2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6df7295-58bb-4ece-841f-f93afd3f9dc9,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T00:59:01.
080750098Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377941402250831,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-
minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-14T00:59:01.080756174Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a70688c9af908e275f30529130ed766736e70b5733e7067cf8bfcb04c67b254,Metadata:&PodSandboxMetadata{Name:kube-proxy-wjz6d,Uid:80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377941390582905,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wjz6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.i
o/config.seen: 2024-03-14T00:59:01.080753588Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a3f1204219842148966555397eb898aa648b5a21e2e336221e2549fcac78cc79,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-164135,Uid:9eff47c507cfd66cf030c245f9d1227f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377936610338132,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eff47c507cfd66cf030c245f9d1227f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.72:2379,kubernetes.io/config.hash: 9eff47c507cfd66cf030c245f9d1227f,kubernetes.io/config.seen: 2024-03-14T00:58:56.133400743Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:190cd2c13792cb5079fd39c2b695a8d53a11925d08e0f4eeb751179abe64caa8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-c
erts-164135,Uid:8581d50187b10e539e7104520acb6dee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377936608939192,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8581d50187b10e539e7104520acb6dee,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8581d50187b10e539e7104520acb6dee,kubernetes.io/config.seen: 2024-03-14T00:58:56.082421867Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:074cbe23e5592f7dbbd572e3b6eec9f75ac3801ecad53223b4c4bc8ade8c0fcb,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-164135,Uid:7243ee770cce457c6955feda92fc46a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377936604486615,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-1641
35,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7243ee770cce457c6955feda92fc46a2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7243ee770cce457c6955feda92fc46a2,kubernetes.io/config.seen: 2024-03-14T00:58:56.082426745Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3f49785534e5e082813189762c8ad6da5222cfa3e4a6000a960a7f198a5136ef,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-164135,Uid:721baa760f2eade26efc571ba635dfcb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710377936586538796,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721baa760f2eade26efc571ba635dfcb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.72:8443,kubernetes.io/config.hash: 721baa760f2eade26efc571ba635
dfcb,kubernetes.io/config.seen: 2024-03-14T00:58:56.082417773Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2d79eb83-461d-4bb5-a520-ca29d8e0335d name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.348216160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dab2a19d-40d5-463f-849a-cbe80bdd5b9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.348296437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dab2a19d-40d5-463f-849a-cbe80bdd5b9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.348560493Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377972347065014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c043e68cf38f6698396df019dd1accee542f53b7b3a728c7cd4c0fbe78740ac,PodSandboxId:2f8ccb5fcf859e04a4fab59c7e34e972573de99559938773051a0e227bb2ab29,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377951855199729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b24e199-4e82-4c69-bb1f-11fb49d244fe,},Annotations:map[string]string{io.kubernetes.container.hash: 418dae4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127,PodSandboxId:f9b93e1152c040ff8ecf86534669abd8a8688d4c49d359cd25b5498d27cd3a1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377949208103194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r2dml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18370dd-193e-45c2-ab72-36f8155ac015,},Annotations:map[string]string{io.kubernetes.container.hash: d4479a7a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd,PodSandboxId:2a70688c9af908e275f30529130ed766736e70b5733e7067cf8bfcb04c67b254,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377941611129765,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjz6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b76a6d-0a4a-4e06-8
e0a-7ac69d91a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: ee399f47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377941618840084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb05
00a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684,PodSandboxId:074cbe23e5592f7dbbd572e3b6eec9f75ac3801ecad53223b4c4bc8ade8c0fcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377936886170815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7243ee770cce457c6955feda92fc46a2,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a,PodSandboxId:3f49785534e5e082813189762c8ad6da5222cfa3e4a6000a960a7f198a5136ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377936887562191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721baa760f2eade26efc571ba635dfcb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d67d2646,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642,PodSandboxId:190cd2c13792cb5079fd39c2b695a8d53a11925d08e0f4eeb751179abe64caa8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377936808196049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8581d50187b10e539e7104520acb6dee,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d,PodSandboxId:a3f1204219842148966555397eb898aa648b5a21e2e336221e2549fcac78cc79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377936795983442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eff47c507cfd66cf030c245f9d1227f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: e9fc4344,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dab2a19d-40d5-463f-849a-cbe80bdd5b9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.373690811Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76b0aead-b20a-4c22-8e55-57fb0e43d885 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.373771463Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76b0aead-b20a-4c22-8e55-57fb0e43d885 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.375603620Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e41276b6-585d-46ff-bdc8-728ef4bc42e8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.376462496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378749376429815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e41276b6-585d-46ff-bdc8-728ef4bc42e8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.377377125Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d401be41-978b-4bb0-b8ce-28b553fd03e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.377457711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d401be41-978b-4bb0-b8ce-28b553fd03e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.377721965Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377972347065014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c043e68cf38f6698396df019dd1accee542f53b7b3a728c7cd4c0fbe78740ac,PodSandboxId:2f8ccb5fcf859e04a4fab59c7e34e972573de99559938773051a0e227bb2ab29,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377951855199729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b24e199-4e82-4c69-bb1f-11fb49d244fe,},Annotations:map[string]string{io.kubernetes.container.hash: 418dae4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127,PodSandboxId:f9b93e1152c040ff8ecf86534669abd8a8688d4c49d359cd25b5498d27cd3a1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377949208103194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r2dml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18370dd-193e-45c2-ab72-36f8155ac015,},Annotations:map[string]string{io.kubernetes.container.hash: d4479a7a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd,PodSandboxId:2a70688c9af908e275f30529130ed766736e70b5733e7067cf8bfcb04c67b254,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377941611129765,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjz6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b76a6d-0a4a-4e06-8
e0a-7ac69d91a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: ee399f47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377941618840084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb05
00a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684,PodSandboxId:074cbe23e5592f7dbbd572e3b6eec9f75ac3801ecad53223b4c4bc8ade8c0fcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377936886170815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7243ee770cce457c6955feda92fc46a2,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a,PodSandboxId:3f49785534e5e082813189762c8ad6da5222cfa3e4a6000a960a7f198a5136ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377936887562191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721baa760f2eade26efc571ba635dfcb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d67d2646,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642,PodSandboxId:190cd2c13792cb5079fd39c2b695a8d53a11925d08e0f4eeb751179abe64caa8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377936808196049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8581d50187b10e539e7104520acb6dee,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d,PodSandboxId:a3f1204219842148966555397eb898aa648b5a21e2e336221e2549fcac78cc79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377936795983442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eff47c507cfd66cf030c245f9d1227f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: e9fc4344,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d401be41-978b-4bb0-b8ce-28b553fd03e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.416574784Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4654c6cf-638a-4c47-b6bf-0b8a4e86c24c name=/runtime.v1.RuntimeService/Version
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.416711156Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4654c6cf-638a-4c47-b6bf-0b8a4e86c24c name=/runtime.v1.RuntimeService/Version
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.418083345Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ce81176-336c-4360-b4c2-76d5b0d7f468 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.418802899Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378749418769078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ce81176-336c-4360-b4c2-76d5b0d7f468 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.419346989Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbc74ebd-3393-4cf9-9dee-3517bf5c1dd6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.419427503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbc74ebd-3393-4cf9-9dee-3517bf5c1dd6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:12:29 embed-certs-164135 crio[703]: time="2024-03-14 01:12:29.419773581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377972347065014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c043e68cf38f6698396df019dd1accee542f53b7b3a728c7cd4c0fbe78740ac,PodSandboxId:2f8ccb5fcf859e04a4fab59c7e34e972573de99559938773051a0e227bb2ab29,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377951855199729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b24e199-4e82-4c69-bb1f-11fb49d244fe,},Annotations:map[string]string{io.kubernetes.container.hash: 418dae4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127,PodSandboxId:f9b93e1152c040ff8ecf86534669abd8a8688d4c49d359cd25b5498d27cd3a1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377949208103194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r2dml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18370dd-193e-45c2-ab72-36f8155ac015,},Annotations:map[string]string{io.kubernetes.container.hash: d4479a7a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd,PodSandboxId:2a70688c9af908e275f30529130ed766736e70b5733e7067cf8bfcb04c67b254,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377941611129765,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjz6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b76a6d-0a4a-4e06-8
e0a-7ac69d91a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: ee399f47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377941618840084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb05
00a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684,PodSandboxId:074cbe23e5592f7dbbd572e3b6eec9f75ac3801ecad53223b4c4bc8ade8c0fcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377936886170815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7243ee770cce457c6955feda92fc46a2,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a,PodSandboxId:3f49785534e5e082813189762c8ad6da5222cfa3e4a6000a960a7f198a5136ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377936887562191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721baa760f2eade26efc571ba635dfcb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d67d2646,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642,PodSandboxId:190cd2c13792cb5079fd39c2b695a8d53a11925d08e0f4eeb751179abe64caa8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377936808196049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8581d50187b10e539e7104520acb6dee,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d,PodSandboxId:a3f1204219842148966555397eb898aa648b5a21e2e336221e2549fcac78cc79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377936795983442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eff47c507cfd66cf030c245f9d1227f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: e9fc4344,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bbc74ebd-3393-4cf9-9dee-3517bf5c1dd6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d987b830b81fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   da36646c444c7       storage-provisioner
	9c043e68cf38f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   2f8ccb5fcf859       busybox
	a69c7aed18e08       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   f9b93e1152c04       coredns-5dd5756b68-r2dml
	2e736f3d1ff7d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   da36646c444c7       storage-provisioner
	1a163fee30923       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   2a70688c9af90       kube-proxy-wjz6d
	bacb8fc976a14       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   3f49785534e5e       kube-apiserver-embed-certs-164135
	066a9f5381b01       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   074cbe23e5592       kube-scheduler-embed-certs-164135
	dbb700c9f2e3b       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   190cd2c13792c       kube-controller-manager-embed-certs-164135
	24395f2c73e37       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   a3f1204219842       etcd-embed-certs-164135
	
	
	==> coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39374 - 46825 "HINFO IN 1958781166621160017.2921693539955365987. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009945916s
	
	
	==> describe nodes <==
	Name:               embed-certs-164135
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-164135
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=embed-certs-164135
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T00_49_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:49:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-164135
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 01:12:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 01:09:42 +0000   Thu, 14 Mar 2024 00:49:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 01:09:42 +0000   Thu, 14 Mar 2024 00:49:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 01:09:42 +0000   Thu, 14 Mar 2024 00:49:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 01:09:42 +0000   Thu, 14 Mar 2024 00:59:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.72
	  Hostname:    embed-certs-164135
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 30080eb06c724ee7b913b8bec5f80c3f
	  System UUID:                30080eb0-6c72-4ee7-b913-b8bec5f80c3f
	  Boot ID:                    81ef2eec-6092-4c2b-bffc-91c2a5c86ba1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-5dd5756b68-r2dml                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-164135                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-164135             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-164135    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-wjz6d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-164135             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-bbz2d               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-164135 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-164135 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-164135 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                kubelet          Node embed-certs-164135 status is now: NodeReady
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-164135 event: Registered Node embed-certs-164135 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-164135 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-164135 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-164135 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-164135 event: Registered Node embed-certs-164135 in Controller
	
	
	==> dmesg <==
	[Mar14 00:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054695] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045619] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.920974] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.722886] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.671564] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000063] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.340656] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.065050] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067327] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.208955] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.138912] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.267254] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +5.086459] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +0.068193] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.976211] systemd-fstab-generator[916]: Ignoring "noauto" option for root device
	[Mar14 00:59] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.003403] systemd-fstab-generator[1529]: Ignoring "noauto" option for root device
	[  +3.694951] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.943130] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] <==
	{"level":"info","ts":"2024-03-14T00:58:57.199226Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:58:57.199236Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:58:57.199376Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.72:2380"}
	{"level":"info","ts":"2024-03-14T00:58:57.199404Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.72:2380"}
	{"level":"info","ts":"2024-03-14T00:58:57.200118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87349ef525ad2fc2 switched to configuration voters=(9742586669645508546)"}
	{"level":"info","ts":"2024-03-14T00:58:57.200222Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cf1dc574e5b9e532","local-member-id":"87349ef525ad2fc2","added-peer-id":"87349ef525ad2fc2","added-peer-peer-urls":["https://192.168.50.72:2380"]}
	{"level":"info","ts":"2024-03-14T00:58:57.200346Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cf1dc574e5b9e532","local-member-id":"87349ef525ad2fc2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:58:57.200393Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:58:58.929046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87349ef525ad2fc2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-14T00:58:58.92908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87349ef525ad2fc2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-14T00:58:58.929119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87349ef525ad2fc2 received MsgPreVoteResp from 87349ef525ad2fc2 at term 2"}
	{"level":"info","ts":"2024-03-14T00:58:58.929132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87349ef525ad2fc2 became candidate at term 3"}
	{"level":"info","ts":"2024-03-14T00:58:58.929137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87349ef525ad2fc2 received MsgVoteResp from 87349ef525ad2fc2 at term 3"}
	{"level":"info","ts":"2024-03-14T00:58:58.929145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87349ef525ad2fc2 became leader at term 3"}
	{"level":"info","ts":"2024-03-14T00:58:58.929152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 87349ef525ad2fc2 elected leader 87349ef525ad2fc2 at term 3"}
	{"level":"info","ts":"2024-03-14T00:58:58.930897Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:58:58.931836Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.72:2379"}
	{"level":"info","ts":"2024-03-14T00:58:58.930842Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"87349ef525ad2fc2","local-member-attributes":"{Name:embed-certs-164135 ClientURLs:[https://192.168.50.72:2379]}","request-path":"/0/members/87349ef525ad2fc2/attributes","cluster-id":"cf1dc574e5b9e532","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T00:58:58.935976Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:58:58.936843Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T00:58:58.936973Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T00:58:58.937005Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T01:08:58.964712Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":809}
	{"level":"info","ts":"2024-03-14T01:08:58.96794Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":809,"took":"2.878508ms","hash":1106108815}
	{"level":"info","ts":"2024-03-14T01:08:58.968034Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1106108815,"revision":809,"compact-revision":-1}
	
	
	==> kernel <==
	 01:12:29 up 13 min,  0 users,  load average: 0.42, 0.20, 0.11
	Linux embed-certs-164135 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] <==
	I0314 01:09:00.308307       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 01:09:01.308096       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:09:01.308204       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:09:01.308226       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:09:01.308117       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:09:01.308301       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:09:01.309569       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 01:10:00.269849       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 01:10:01.308997       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:10:01.309187       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:10:01.309243       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:10:01.310416       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:10:01.310489       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:10:01.310523       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 01:11:00.269896       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 01:12:00.269605       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 01:12:01.310187       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:12:01.310321       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:12:01.310330       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:12:01.311694       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:12:01.311747       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:12:01.311755       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] <==
	I0314 01:06:43.638252       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:07:13.176143       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:07:13.647988       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:07:43.181958       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:07:43.656412       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:08:13.188182       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:08:13.669927       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:08:43.194350       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:08:43.678572       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:09:13.199814       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:09:13.687293       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:09:43.205208       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:09:43.696307       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 01:10:04.187890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="310.298µs"
	E0314 01:10:13.211776       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:10:13.707058       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 01:10:19.182058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="159.495µs"
	E0314 01:10:43.217909       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:10:43.716151       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:11:13.224764       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:11:13.725424       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:11:43.230809       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:11:43.737411       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:12:13.237201       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:12:13.745613       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] <==
	I0314 00:59:01.847041       1 server_others.go:69] "Using iptables proxy"
	I0314 00:59:01.858935       1 node.go:141] Successfully retrieved node IP: 192.168.50.72
	I0314 00:59:01.941231       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 00:59:01.943585       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 00:59:01.950873       1 server_others.go:152] "Using iptables Proxier"
	I0314 00:59:01.950930       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 00:59:01.951118       1 server.go:846] "Version info" version="v1.28.4"
	I0314 00:59:01.951148       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:59:01.952033       1 config.go:188] "Starting service config controller"
	I0314 00:59:01.952073       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 00:59:01.952097       1 config.go:97] "Starting endpoint slice config controller"
	I0314 00:59:01.952104       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 00:59:01.955978       1 config.go:315] "Starting node config controller"
	I0314 00:59:01.956008       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 00:59:02.052232       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 00:59:02.052289       1 shared_informer.go:318] Caches are synced for service config
	I0314 00:59:02.056737       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] <==
	I0314 00:58:57.948121       1 serving.go:348] Generated self-signed cert in-memory
	W0314 00:59:00.363296       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 00:59:00.363430       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 00:59:00.363581       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 00:59:00.363613       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 00:59:00.397704       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 00:59:00.397852       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:59:00.401425       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 00:59:00.401505       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 00:59:00.402565       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 00:59:00.405726       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 00:59:00.501775       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 01:09:56 embed-certs-164135 kubelet[923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 01:09:56 embed-certs-164135 kubelet[923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 01:09:56 embed-certs-164135 kubelet[923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:09:56 embed-certs-164135 kubelet[923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:10:04 embed-certs-164135 kubelet[923]: E0314 01:10:04.167927     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:10:19 embed-certs-164135 kubelet[923]: E0314 01:10:19.166284     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:10:31 embed-certs-164135 kubelet[923]: E0314 01:10:31.166883     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:10:45 embed-certs-164135 kubelet[923]: E0314 01:10:45.166408     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:10:56 embed-certs-164135 kubelet[923]: E0314 01:10:56.188174     923 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 01:10:56 embed-certs-164135 kubelet[923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 01:10:56 embed-certs-164135 kubelet[923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 01:10:56 embed-certs-164135 kubelet[923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:10:56 embed-certs-164135 kubelet[923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:10:57 embed-certs-164135 kubelet[923]: E0314 01:10:57.167112     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:11:11 embed-certs-164135 kubelet[923]: E0314 01:11:11.166965     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:11:25 embed-certs-164135 kubelet[923]: E0314 01:11:25.166774     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:11:38 embed-certs-164135 kubelet[923]: E0314 01:11:38.168119     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:11:50 embed-certs-164135 kubelet[923]: E0314 01:11:50.167184     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:11:56 embed-certs-164135 kubelet[923]: E0314 01:11:56.188067     923 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 01:11:56 embed-certs-164135 kubelet[923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 01:11:56 embed-certs-164135 kubelet[923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 01:11:56 embed-certs-164135 kubelet[923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:11:56 embed-certs-164135 kubelet[923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:12:03 embed-certs-164135 kubelet[923]: E0314 01:12:03.166580     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:12:18 embed-certs-164135 kubelet[923]: E0314 01:12:18.166010     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	
	
	==> storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] <==
	I0314 00:59:01.730500       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0314 00:59:31.736208       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] <==
	I0314 00:59:32.452058       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 00:59:32.467220       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 00:59:32.467394       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 00:59:49.871581       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24948ad4-4184-4bcb-a96f-bdf0dcc6da5a", APIVersion:"v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-164135_5f6332af-0ee0-4bc2-8732-4a59fe51ace0 became leader
	I0314 00:59:49.874256       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 00:59:49.874511       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-164135_5f6332af-0ee0-4bc2-8732-4a59fe51ace0!
	I0314 00:59:49.976404       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-164135_5f6332af-0ee0-4bc2-8732-4a59fe51ace0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-164135 -n embed-certs-164135
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-164135 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-bbz2d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-164135 describe pod metrics-server-57f55c9bc5-bbz2d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-164135 describe pod metrics-server-57f55c9bc5-bbz2d: exit status 1 (65.730711ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-bbz2d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-164135 describe pod metrics-server-57f55c9bc5-bbz2d: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:06:43.240612   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:07:34.789854   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:07:46.696288   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:08:06.288640   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:08:15.675526   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:08:36.335708   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:09:00.714650   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:09:09.740061   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:09:35.522199   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:09:38.721944   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:09:44.448531   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:09:55.863029   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:10:23.758488   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:11:11.745502   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:11:39.383973   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:11:43.240751   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:12:46.695408   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
(last message repeated 28 more times)
E0314 01:13:15.675978   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
(last message repeated 20 more times)
E0314 01:13:36.335714   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
(last message repeated 23 more times)
E0314 01:14:00.714730   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
(last message repeated 34 more times)
E0314 01:14:35.522985   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
(last message repeated 8 more times)
E0314 01:14:44.448912   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
(last message repeated 10 more times)
E0314 01:14:55.862648   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
(last message repeated 15 more times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-004791 -n old-k8s-version-004791
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-004791 -n old-k8s-version-004791: exit status 2 (257.230145ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-004791" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791: exit status 2 (243.222454ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-004791 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-004791 logs -n 25: (1.648350307s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-326260 sudo cat                              | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo find                             | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo crio                             | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-326260                                       | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-573365 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | disable-driver-mounts-573365                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:51 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-164135            | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-585806             | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-652215  | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC | 14 Mar 24 00:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC |                     |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-004791        | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-164135                 | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC | 14 Mar 24 01:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-585806                  | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-652215       | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 00:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-004791             | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC | 14 Mar 24 00:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:54:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:54:03.108880   66232 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:54:03.109016   66232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:54:03.109028   66232 out.go:304] Setting ErrFile to fd 2...
	I0314 00:54:03.109034   66232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:54:03.109233   66232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:54:03.109796   66232 out.go:298] Setting JSON to false
	I0314 00:54:03.110638   66232 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5786,"bootTime":1710371857,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:54:03.110699   66232 start.go:139] virtualization: kvm guest
	I0314 00:54:03.113106   66232 out.go:177] * [old-k8s-version-004791] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:54:03.114565   66232 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:54:03.115894   66232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:54:03.114598   66232 notify.go:220] Checking for updates...
	I0314 00:54:03.119029   66232 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:54:03.120493   66232 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:54:03.121915   66232 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:54:03.123383   66232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:54:03.125258   66232 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:54:03.125814   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:54:03.125873   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:54:03.140521   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0314 00:54:03.140889   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:54:03.141339   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:54:03.141362   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:54:03.141702   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:54:03.141898   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:54:03.143989   66232 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0314 00:54:03.145403   66232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:54:03.145671   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:54:03.145711   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:54:03.159852   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0314 00:54:03.160244   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:54:03.160722   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:54:03.160742   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:54:03.161088   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:54:03.161279   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:54:03.197047   66232 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 00:54:03.198624   66232 start.go:297] selected driver: kvm2
	I0314 00:54:03.198642   66232 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:54:03.198784   66232 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:54:03.199455   66232 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:54:03.199536   66232 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:54:03.214619   66232 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:54:03.214983   66232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 00:54:03.215045   66232 cni.go:84] Creating CNI manager for ""
	I0314 00:54:03.215065   66232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:54:03.215109   66232 start.go:340] cluster config:
	{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:54:03.215204   66232 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:54:03.217175   66232 out.go:177] * Starting "old-k8s-version-004791" primary control-plane node in "old-k8s-version-004791" cluster
	I0314 00:54:03.607045   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:03.218613   66232 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:54:03.218655   66232 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0314 00:54:03.218680   66232 cache.go:56] Caching tarball of preloaded images
	I0314 00:54:03.218748   66232 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 00:54:03.218758   66232 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0314 00:54:03.218868   66232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:54:03.219079   66232 start.go:360] acquireMachinesLock for old-k8s-version-004791: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:54:06.679066   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:12.759084   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:15.831164   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:21.911055   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:24.983011   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:31.063042   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:34.135127   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:40.215026   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:43.287108   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:49.367033   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:52.439207   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:58.519055   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:01.591066   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:07.671067   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:10.743137   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:16.823021   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:19.895094   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:25.975060   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:29.047059   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:35.127005   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:38.199075   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:44.279056   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:47.351112   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:53.431074   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:56.503093   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:02.583065   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:05.655062   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:11.735056   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:14.807089   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:20.887027   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:23.959111   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:30.039063   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:33.111114   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:39.191071   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:42.263146   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:48.343110   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:51.415094   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:57.495078   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:00.567113   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:06.647070   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:09.719103   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:15.799052   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:18.871072   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:21.875726   65864 start.go:364] duration metric: took 3m53.150432404s to acquireMachinesLock for "no-preload-585806"
	I0314 00:57:21.875777   65864 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:57:21.875782   65864 fix.go:54] fixHost starting: 
	I0314 00:57:21.876117   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:57:21.876145   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:57:21.891135   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0314 00:57:21.891589   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:57:21.892096   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:57:21.892118   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:57:21.892476   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:57:21.892705   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:21.892868   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:57:21.894635   65864 fix.go:112] recreateIfNeeded on no-preload-585806: state=Stopped err=<nil>
	I0314 00:57:21.894652   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	W0314 00:57:21.894870   65864 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:57:21.896740   65864 out.go:177] * Restarting existing kvm2 VM for "no-preload-585806" ...
	I0314 00:57:21.898041   65864 main.go:141] libmachine: (no-preload-585806) Calling .Start
	I0314 00:57:21.898219   65864 main.go:141] libmachine: (no-preload-585806) Ensuring networks are active...
	I0314 00:57:21.899235   65864 main.go:141] libmachine: (no-preload-585806) Ensuring network default is active
	I0314 00:57:21.899677   65864 main.go:141] libmachine: (no-preload-585806) Ensuring network mk-no-preload-585806 is active
	I0314 00:57:21.900069   65864 main.go:141] libmachine: (no-preload-585806) Getting domain xml...
	I0314 00:57:21.900819   65864 main.go:141] libmachine: (no-preload-585806) Creating domain...
	I0314 00:57:23.105194   65864 main.go:141] libmachine: (no-preload-585806) Waiting to get IP...
	I0314 00:57:23.106090   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.106528   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.106637   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.106516   66729 retry.go:31] will retry after 255.90484ms: waiting for machine to come up
	I0314 00:57:23.364317   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.364804   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.364826   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.364757   66729 retry.go:31] will retry after 364.462281ms: waiting for machine to come up
	I0314 00:57:21.873289   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:57:21.873326   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:57:21.873694   65557 buildroot.go:166] provisioning hostname "embed-certs-164135"
	I0314 00:57:21.873720   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:57:21.873951   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:57:21.875591   65557 machine.go:97] duration metric: took 4m37.40921849s to provisionDockerMachine
	I0314 00:57:21.875631   65557 fix.go:56] duration metric: took 4m37.430459802s for fixHost
	I0314 00:57:21.875640   65557 start.go:83] releasing machines lock for "embed-certs-164135", held for 4m37.43047806s
	W0314 00:57:21.875666   65557 start.go:713] error starting host: provision: host is not running
	W0314 00:57:21.875751   65557 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0314 00:57:21.875760   65557 start.go:728] Will try again in 5 seconds ...
	I0314 00:57:23.731388   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.731971   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.732021   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.731924   66729 retry.go:31] will retry after 426.10288ms: waiting for machine to come up
	I0314 00:57:24.159436   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:24.159930   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:24.159966   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:24.159889   66729 retry.go:31] will retry after 490.499532ms: waiting for machine to come up
	I0314 00:57:24.651751   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:24.652239   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:24.652273   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:24.652218   66729 retry.go:31] will retry after 719.835184ms: waiting for machine to come up
	I0314 00:57:25.374185   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:25.374702   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:25.374728   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:25.374660   66729 retry.go:31] will retry after 944.773779ms: waiting for machine to come up
	I0314 00:57:26.320707   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:26.321049   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:26.321080   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:26.320994   66729 retry.go:31] will retry after 1.088133876s: waiting for machine to come up
	I0314 00:57:27.410642   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:27.411035   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:27.411066   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:27.410989   66729 retry.go:31] will retry after 1.379863279s: waiting for machine to come up
	I0314 00:57:26.877563   65557 start.go:360] acquireMachinesLock for embed-certs-164135: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:57:28.792154   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:28.792533   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:28.792564   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:28.792473   66729 retry.go:31] will retry after 1.814530842s: waiting for machine to come up
	I0314 00:57:30.609244   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:30.609658   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:30.609693   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:30.609597   66729 retry.go:31] will retry after 1.625136332s: waiting for machine to come up
	I0314 00:57:32.236903   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:32.237390   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:32.237409   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:32.237352   66729 retry.go:31] will retry after 1.788940449s: waiting for machine to come up
	I0314 00:57:34.028330   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:34.028825   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:34.028863   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:34.028779   66729 retry.go:31] will retry after 3.427808205s: waiting for machine to come up
	I0314 00:57:37.458317   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:37.458803   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:37.458835   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:37.458738   66729 retry.go:31] will retry after 3.173848854s: waiting for machine to come up
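(Editorial note: the "will retry after …: waiting for machine to come up" lines above come from a backoff-style retry helper (retry.go) polling libvirt for the VM's DHCP lease. The sketch below is a rough, illustrative Go loop of that shape only; the function name, delays, and jitter are assumptions for this sketch, not minikube's actual retry implementation.)

```go
// Illustrative only: a minimal backoff-retry loop of the kind the
// "retry.go:31] will retry after ..." log lines suggest. Names and
// durations are assumptions, not minikube's real code.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or attempts run out,
// sleeping a jittered, growing delay between tries (as in the log above).
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2 // grow the base delay each round
	}
	return "", errors.New("machine did not come up in time")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.115", nil
	}, 10)
	fmt.Println(ip, err)
}
```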
	I0314 00:57:41.915825   66021 start.go:364] duration metric: took 3m51.688049305s to acquireMachinesLock for "default-k8s-diff-port-652215"
	I0314 00:57:41.915886   66021 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:57:41.915895   66021 fix.go:54] fixHost starting: 
	I0314 00:57:41.916343   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:57:41.916378   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:57:41.933352   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37953
	I0314 00:57:41.933827   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:57:41.934418   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:57:41.934441   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:57:41.934820   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:57:41.934993   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:57:41.935162   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:57:41.936554   66021 fix.go:112] recreateIfNeeded on default-k8s-diff-port-652215: state=Stopped err=<nil>
	I0314 00:57:41.936586   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	W0314 00:57:41.936734   66021 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:57:41.939097   66021 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-652215" ...
	I0314 00:57:40.636094   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.636607   65864 main.go:141] libmachine: (no-preload-585806) Found IP for machine: 192.168.39.115
	I0314 00:57:40.636638   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has current primary IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.636645   65864 main.go:141] libmachine: (no-preload-585806) Reserving static IP address...
	I0314 00:57:40.637156   65864 main.go:141] libmachine: (no-preload-585806) Reserved static IP address: 192.168.39.115
	I0314 00:57:40.637189   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "no-preload-585806", mac: "52:54:00:2a:3a:3b", ip: "192.168.39.115"} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.637199   65864 main.go:141] libmachine: (no-preload-585806) Waiting for SSH to be available...
	I0314 00:57:40.637238   65864 main.go:141] libmachine: (no-preload-585806) DBG | skip adding static IP to network mk-no-preload-585806 - found existing host DHCP lease matching {name: "no-preload-585806", mac: "52:54:00:2a:3a:3b", ip: "192.168.39.115"}
	I0314 00:57:40.637254   65864 main.go:141] libmachine: (no-preload-585806) DBG | Getting to WaitForSSH function...
	I0314 00:57:40.639772   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.640240   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.640272   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.640445   65864 main.go:141] libmachine: (no-preload-585806) DBG | Using SSH client type: external
	I0314 00:57:40.640474   65864 main.go:141] libmachine: (no-preload-585806) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa (-rw-------)
	I0314 00:57:40.640508   65864 main.go:141] libmachine: (no-preload-585806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:57:40.640524   65864 main.go:141] libmachine: (no-preload-585806) DBG | About to run SSH command:
	I0314 00:57:40.640533   65864 main.go:141] libmachine: (no-preload-585806) DBG | exit 0
	I0314 00:57:40.770988   65864 main.go:141] libmachine: (no-preload-585806) DBG | SSH cmd err, output: <nil>: 
	I0314 00:57:40.771390   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetConfigRaw
	I0314 00:57:40.772025   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:40.774781   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.775128   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.775161   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.775407   65864 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/config.json ...
	I0314 00:57:40.775636   65864 machine.go:94] provisionDockerMachine start ...
	I0314 00:57:40.775658   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:40.775856   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:40.778051   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.778420   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.778447   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.778517   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:40.778728   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.778917   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.779101   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:40.779283   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:40.779521   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:40.779535   65864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:57:40.891616   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:57:40.891661   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:40.891913   65864 buildroot.go:166] provisioning hostname "no-preload-585806"
	I0314 00:57:40.891947   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:40.892139   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:40.895038   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.895441   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.895473   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.895593   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:40.895778   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.895899   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.896044   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:40.896206   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:40.896418   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:40.896438   65864 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-585806 && echo "no-preload-585806" | sudo tee /etc/hostname
	I0314 00:57:41.027921   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-585806
	
	I0314 00:57:41.027946   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.030406   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.030826   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.030856   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.031091   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.031314   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.031458   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.031656   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.031820   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.032043   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.032064   65864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-585806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-585806/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-585806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:57:41.152387   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:57:41.152420   65864 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:57:41.152443   65864 buildroot.go:174] setting up certificates
	I0314 00:57:41.152451   65864 provision.go:84] configureAuth start
	I0314 00:57:41.152459   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:41.152713   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:41.155431   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.155790   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.155816   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.155963   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.158272   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.158691   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.158720   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.158912   65864 provision.go:143] copyHostCerts
	I0314 00:57:41.158991   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:57:41.159005   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:57:41.159094   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:57:41.159204   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:57:41.159213   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:57:41.159242   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:57:41.159299   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:57:41.159306   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:57:41.159326   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:57:41.159380   65864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.no-preload-585806 san=[127.0.0.1 192.168.39.115 localhost minikube no-preload-585806]
	I0314 00:57:41.204543   65864 provision.go:177] copyRemoteCerts
	I0314 00:57:41.204599   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:57:41.204624   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.207169   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.207479   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.207505   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.207717   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.207870   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.208042   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.208200   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.294111   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:57:41.319125   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 00:57:41.344061   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:57:41.369393   65864 provision.go:87] duration metric: took 216.929827ms to configureAuth
	I0314 00:57:41.369428   65864 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:57:41.369621   65864 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:57:41.369690   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.372440   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.372782   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.372809   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.373062   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.373298   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.373543   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.373716   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.373895   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.374097   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.374122   65864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:57:41.665162   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:57:41.665200   65864 machine.go:97] duration metric: took 889.549183ms to provisionDockerMachine
	I0314 00:57:41.665214   65864 start.go:293] postStartSetup for "no-preload-585806" (driver="kvm2")
	I0314 00:57:41.665227   65864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:57:41.665243   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.665626   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:57:41.665662   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.668351   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.668798   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.668827   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.669012   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.669412   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.669635   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.669794   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.758910   65864 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:57:41.763539   65864 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:57:41.763571   65864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:57:41.763645   65864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:57:41.763719   65864 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:57:41.763809   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:57:41.774372   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:57:41.799961   65864 start.go:296] duration metric: took 134.732457ms for postStartSetup
	I0314 00:57:41.800006   65864 fix.go:56] duration metric: took 19.924222364s for fixHost
	I0314 00:57:41.800030   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.802714   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.803178   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.803201   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.803357   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.803557   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.803730   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.803888   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.804064   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.804220   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.804231   65864 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:57:41.915615   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377861.868053197
	
	I0314 00:57:41.915646   65864 fix.go:216] guest clock: 1710377861.868053197
	I0314 00:57:41.915654   65864 fix.go:229] Guest: 2024-03-14 00:57:41.868053197 +0000 UTC Remote: 2024-03-14 00:57:41.800010702 +0000 UTC m=+253.225618100 (delta=68.042495ms)
	I0314 00:57:41.915695   65864 fix.go:200] guest clock delta is within tolerance: 68.042495ms
	I0314 00:57:41.915704   65864 start.go:83] releasing machines lock for "no-preload-585806", held for 20.039948178s
	I0314 00:57:41.915733   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.916097   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:41.918713   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.919145   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.919175   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.919352   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.919878   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.920065   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.920140   65864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:57:41.920200   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.920257   65864 ssh_runner.go:195] Run: cat /version.json
	I0314 00:57:41.920279   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.922799   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923104   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923176   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.923200   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923333   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.923527   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.923572   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.923602   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923710   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.923788   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.923884   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.923950   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.924091   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.924265   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:42.004651   65864 ssh_runner.go:195] Run: systemctl --version
	I0314 00:57:42.045673   65864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:57:42.198196   65864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:57:42.204887   65864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:57:42.204968   65864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:57:42.223088   65864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:57:42.223116   65864 start.go:494] detecting cgroup driver to use...
	I0314 00:57:42.223181   65864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:57:42.240213   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:57:42.260222   65864 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:57:42.260282   65864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:57:42.279489   65864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:57:42.297898   65864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:57:42.436010   65864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:57:42.591582   65864 docker.go:233] disabling docker service ...
	I0314 00:57:42.591653   65864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:57:42.609192   65864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:57:42.629505   65864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:57:42.788667   65864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:57:42.920745   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:57:42.947679   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:57:42.970420   65864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:57:42.970496   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:42.984792   65864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:57:42.984851   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:42.998350   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:43.011001   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:43.023341   65864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:57:43.036165   65864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:57:43.047342   65864 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:57:43.047401   65864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:57:43.063390   65864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:57:43.075512   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:57:43.214939   65864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:57:43.370092   65864 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:57:43.370154   65864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:57:43.375110   65864 start.go:562] Will wait 60s for crictl version
	I0314 00:57:43.375156   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.379051   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:57:43.421498   65864 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:57:43.421587   65864 ssh_runner.go:195] Run: crio --version
	I0314 00:57:43.451281   65864 ssh_runner.go:195] Run: crio --version
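(Editorial note: the sed commands logged above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" after it. The Go sketch below mirrors those substitutions on an in-memory config string as an illustration only; the helper name is an assumption, not minikube's code.)

```go
// Illustrative only: roughly what the logged sed edits do to
// /etc/crio/crio.conf.d/02-crio.conf (pause image + cgroup manager).
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rewriteCrioConf applies the same substitutions the logged sed commands
// perform and returns the updated config text.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	pause := regexp.MustCompile(`^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`^.*cgroup_manager = .*$`)
	conmon := regexp.MustCompile(`conmon_cgroup = `)

	var out []string
	for _, l := range strings.Split(conf, "\n") {
		switch {
		case conmon.MatchString(l):
			continue // drop any existing conmon_cgroup line
		case pause.MatchString(l):
			out = append(out, fmt.Sprintf("pause_image = %q", pauseImage))
		case cgroup.MatchString(l):
			out = append(out, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
			out = append(out, `conmon_cgroup = "pod"`) // re-added after cgroup_manager
		default:
			out = append(out, l)
		}
	}
	return strings.Join(out, "\n")
}

func main() {
	conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\""
	fmt.Println(rewriteCrioConf(conf, "registry.k8s.io/pause:3.9", "cgroupfs"))
}
```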
	I0314 00:57:43.486171   65864 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 00:57:43.487776   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:43.490910   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:43.491299   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:43.491328   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:43.491513   65864 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 00:57:43.495972   65864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:57:43.510066   65864 kubeadm.go:877] updating cluster {Name:no-preload-585806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:57:43.510197   65864 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 00:57:43.510235   65864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:57:43.550172   65864 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 00:57:43.550198   65864 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:57:43.550251   65864 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:43.550290   65864 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.550308   65864 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.550348   65864 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.550373   65864 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.550409   65864 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.550329   65864 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0314 00:57:43.550287   65864 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.551857   65864 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.551883   65864 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.551922   65864 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.551926   65864 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.551915   65864 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.551860   65864 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:43.552047   65864 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0314 00:57:43.552087   65864 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:41.940702   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Start
	I0314 00:57:41.940872   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring networks are active...
	I0314 00:57:41.941571   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring network default is active
	I0314 00:57:41.941942   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring network mk-default-k8s-diff-port-652215 is active
	I0314 00:57:41.942369   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Getting domain xml...
	I0314 00:57:41.943060   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Creating domain...
	I0314 00:57:43.253573   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting to get IP...
	I0314 00:57:43.254399   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.254819   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.254871   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.254798   66848 retry.go:31] will retry after 250.726741ms: waiting for machine to come up
	I0314 00:57:43.507438   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.507947   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.507974   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.507889   66848 retry.go:31] will retry after 261.304364ms: waiting for machine to come up
	I0314 00:57:43.770392   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.770932   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.770992   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.770922   66848 retry.go:31] will retry after 399.951584ms: waiting for machine to come up
	I0314 00:57:44.172796   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.173301   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.173330   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:44.173250   66848 retry.go:31] will retry after 446.71472ms: waiting for machine to come up
	I0314 00:57:44.621959   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.622493   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.622524   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:44.622435   66848 retry.go:31] will retry after 594.760117ms: waiting for machine to come up
	I0314 00:57:43.767614   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.767919   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.781946   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.792745   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.820426   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.821936   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.874149   65864 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0314 00:57:43.874193   65864 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.874207   65864 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0314 00:57:43.874239   65864 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.874263   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.874281   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.909916   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0314 00:57:43.929648   65864 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0314 00:57:43.929701   65864 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.929756   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.929769   65864 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0314 00:57:43.929810   65864 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.929866   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958025   65864 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0314 00:57:43.958074   65864 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.958108   65864 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0314 00:57:43.958151   65864 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.958171   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.958188   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958124   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958192   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:44.099675   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:44.099750   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:44.099805   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:44.099859   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.099898   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:44.099943   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.099999   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0314 00:57:44.100067   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:44.185667   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:44.185697   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0314 00:57:44.185784   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0314 00:57:44.185822   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0314 00:57:44.185833   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.185860   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.185874   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:44.185787   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:44.185787   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:44.191806   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:44.191853   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0314 00:57:44.191922   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:44.205188   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0314 00:57:44.428096   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:47.084005   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.898127832s)
	I0314 00:57:47.084049   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0314 00:57:47.084073   65864 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:47.084084   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.898188272s)
	I0314 00:57:47.084114   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:47.084123   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0314 00:57:47.084163   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.898224944s)
	I0314 00:57:47.084176   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0314 00:57:47.084213   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.892265677s)
	I0314 00:57:47.084231   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0314 00:57:47.084261   65864 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.656144328s)
	I0314 00:57:47.084290   65864 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0314 00:57:47.084313   65864 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:47.084344   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:45.219284   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:45.219835   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:45.219865   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:45.219763   66848 retry.go:31] will retry after 838.074484ms: waiting for machine to come up
	I0314 00:57:46.059759   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:46.060182   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:46.060212   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:46.060124   66848 retry.go:31] will retry after 1.038046627s: waiting for machine to come up
	I0314 00:57:47.100208   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:47.100623   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:47.100651   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:47.100574   66848 retry.go:31] will retry after 1.029629423s: waiting for machine to come up
	I0314 00:57:48.131899   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:48.132332   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:48.132360   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:48.132293   66848 retry.go:31] will retry after 1.38894741s: waiting for machine to come up
	I0314 00:57:49.522727   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:49.523219   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:49.523250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:49.523177   66848 retry.go:31] will retry after 1.498715394s: waiting for machine to come up
	I0314 00:57:51.187413   65864 ssh_runner.go:235] Completed: which crictl: (4.103045994s)
	I0314 00:57:51.187456   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.103319804s)
	I0314 00:57:51.187508   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0314 00:57:51.187527   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:51.187571   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:51.187669   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:51.236123   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 00:57:51.236241   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:53.072155   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.88445651s)
	I0314 00:57:53.072191   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0314 00:57:53.072203   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.835936702s)
	I0314 00:57:53.072239   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0314 00:57:53.072216   65864 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:53.072298   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
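	The image-loading phase above follows one pattern per cached image: a stat on the guest decides whether the tarball copy can be skipped, podman load pushes it into the CRI-O image store, and an image recorded at the wrong digest is removed with crictl before being reloaded from cache. A condensed sketch of a single iteration, using paths and names taken from this run (not the verbatim minikube implementation):

	    # skip the copy when the cached tarball already exists on the guest
	    stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	    # load the tarball into the container runtime's image store
	    sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	    # an image present at the wrong hash is removed so the cached copy can replace it
	    sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5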
	I0314 00:57:51.024135   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:51.024551   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:51.024591   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:51.024485   66848 retry.go:31] will retry after 1.906242033s: waiting for machine to come up
	I0314 00:57:52.931992   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:52.932501   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:52.932532   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:52.932435   66848 retry.go:31] will retry after 2.502905013s: waiting for machine to come up
	I0314 00:57:55.041813   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969486159s)
	I0314 00:57:55.041846   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0314 00:57:55.041873   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:55.041921   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:56.401046   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.359096555s)
	I0314 00:57:56.401083   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0314 00:57:56.401125   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:56.401206   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:55.438250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:55.438696   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:55.438728   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:55.438645   66848 retry.go:31] will retry after 4.267197677s: waiting for machine to come up
	I0314 00:57:59.709345   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.709884   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Found IP for machine: 192.168.61.7
	I0314 00:57:59.709901   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Reserving static IP address...
	I0314 00:57:59.709912   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has current primary IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.710329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-652215", mac: "52:54:00:58:e5:b0", ip: "192.168.61.7"} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.710365   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | skip adding static IP to network mk-default-k8s-diff-port-652215 - found existing host DHCP lease matching {name: "default-k8s-diff-port-652215", mac: "52:54:00:58:e5:b0", ip: "192.168.61.7"}
	I0314 00:57:59.710387   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Reserved static IP address: 192.168.61.7
	I0314 00:57:59.710404   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for SSH to be available...
	I0314 00:57:59.710420   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Getting to WaitForSSH function...
	I0314 00:57:59.712445   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.712764   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.712794   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.712867   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Using SSH client type: external
	I0314 00:57:59.712903   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa (-rw-------)
	I0314 00:57:59.712926   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:57:59.712940   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | About to run SSH command:
	I0314 00:57:59.712946   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | exit 0
	I0314 00:57:59.831120   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | SSH cmd err, output: <nil>: 
	I0314 00:57:59.831427   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetConfigRaw
	I0314 00:57:59.832230   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:57:59.834631   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.835052   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.835085   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.835264   66021 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/config.json ...
	I0314 00:57:59.835458   66021 machine.go:94] provisionDockerMachine start ...
	I0314 00:57:59.835478   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:57:59.835700   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:57:59.838267   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.838654   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.838681   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.838814   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:57:59.838985   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.839158   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.839318   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:57:59.839533   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:59.839750   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:57:59.839764   66021 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:57:59.943463   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:57:59.943488   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:57:59.943743   66021 buildroot.go:166] provisioning hostname "default-k8s-diff-port-652215"
	I0314 00:57:59.943765   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:57:59.943905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:57:59.946244   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.946561   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.946592   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.946858   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:57:59.947069   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.947218   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.947329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:57:59.947522   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:59.947682   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:57:59.947695   66021 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-652215 && echo "default-k8s-diff-port-652215" | sudo tee /etc/hostname
	I0314 00:58:00.063433   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-652215
	
	I0314 00:58:00.063467   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.066382   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.066832   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.066872   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.067051   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.067272   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.067505   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.067706   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.067914   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:00.068139   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:00.068167   66021 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-652215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-652215/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-652215' | sudo tee -a /etc/hosts; 
				fi
			fi
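	The provisioning commands above boil down to two idempotent steps: set the transient and persistent hostname, then make sure /etc/hosts carries a matching 127.0.1.1 entry. A standalone sketch assembled from the commands logged in this run (HOST is the profile's machine name):

	    HOST=default-k8s-diff-port-652215
	    sudo hostname "$HOST" && echo "$HOST" | sudo tee /etc/hostname
	    if ! grep -xq ".*\s$HOST" /etc/hosts; then
	      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $HOST/g" /etc/hosts
	      else
	        echo "127.0.1.1 $HOST" | sudo tee -a /etc/hosts
	      fi
	    fi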
	I0314 00:58:01.167666   66232 start.go:364] duration metric: took 3m57.948538504s to acquireMachinesLock for "old-k8s-version-004791"
	I0314 00:58:01.167732   66232 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:58:01.167743   66232 fix.go:54] fixHost starting: 
	I0314 00:58:01.168159   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:01.168192   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:01.184977   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39193
	I0314 00:58:01.185352   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:01.185781   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:58:01.185799   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:01.186133   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:01.186318   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:01.186463   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetState
	I0314 00:58:01.187778   66232 fix.go:112] recreateIfNeeded on old-k8s-version-004791: state=Stopped err=<nil>
	I0314 00:58:01.187814   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	W0314 00:58:01.187966   66232 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:58:01.190508   66232 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-004791" ...
	I0314 00:58:00.185178   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:00.185209   66021 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:00.185258   66021 buildroot.go:174] setting up certificates
	I0314 00:58:00.185270   66021 provision.go:84] configureAuth start
	I0314 00:58:00.185286   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:58:00.185558   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:00.188566   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.188946   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.188977   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.189147   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.191605   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.191954   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.191981   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.192111   66021 provision.go:143] copyHostCerts
	I0314 00:58:00.192179   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:00.192193   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:00.192295   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:00.192409   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:00.192420   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:00.192449   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:00.192531   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:00.192541   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:00.192571   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:00.192650   66021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-652215 san=[127.0.0.1 192.168.61.7 default-k8s-diff-port-652215 localhost minikube]
	I0314 00:58:00.441714   66021 provision.go:177] copyRemoteCerts
	I0314 00:58:00.441760   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:00.441783   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.444329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.444711   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.444740   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.444905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.445096   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.445257   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.445369   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:00.529677   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:00.560670   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0314 00:58:00.589572   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:58:00.620349   66021 provision.go:87] duration metric: took 435.063551ms to configureAuth
	I0314 00:58:00.620380   66021 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:00.620576   66021 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:00.620670   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.623250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.623633   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.623663   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.623825   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.624017   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.624205   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.624346   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.624474   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:00.624650   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:00.624664   66021 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:00.940388   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:00.940416   66021 machine.go:97] duration metric: took 1.104945308s to provisionDockerMachine
	I0314 00:58:00.940430   66021 start.go:293] postStartSetup for "default-k8s-diff-port-652215" (driver="kvm2")
	I0314 00:58:00.940443   66021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:00.940513   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:00.940829   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:00.940861   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.943461   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.943854   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.943881   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.944035   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.944233   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.944392   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.944514   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.028775   66021 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:01.034219   66021 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:01.034246   66021 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:01.034319   66021 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:01.034417   66021 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:01.034534   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:01.043871   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:01.068236   66021 start.go:296] duration metric: took 127.791208ms for postStartSetup
	I0314 00:58:01.068281   66021 fix.go:56] duration metric: took 19.152386474s for fixHost
	I0314 00:58:01.068320   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.071153   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.071474   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.071519   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.071664   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.071873   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.072037   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.072184   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.072339   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:01.072546   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:01.072560   66021 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:58:01.167500   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377881.146926820
	
	I0314 00:58:01.167531   66021 fix.go:216] guest clock: 1710377881.146926820
	I0314 00:58:01.167543   66021 fix.go:229] Guest: 2024-03-14 00:58:01.14692682 +0000 UTC Remote: 2024-03-14 00:58:01.068285678 +0000 UTC m=+250.989822406 (delta=78.641142ms)
	I0314 00:58:01.167569   66021 fix.go:200] guest clock delta is within tolerance: 78.641142ms
	I0314 00:58:01.167576   66021 start.go:83] releasing machines lock for "default-k8s-diff-port-652215", held for 19.251715411s
	I0314 00:58:01.167603   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.167900   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:01.170608   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.171001   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.171041   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.171190   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171674   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171856   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171937   66021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:01.171985   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.172100   66021 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:01.172128   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.174787   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.174963   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175180   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.175209   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175343   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.175398   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175477   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.175553   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.175677   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.175741   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.175803   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.175880   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.175939   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.176003   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.251768   66021 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:01.289374   66021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:01.438966   66021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:01.445524   66021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:01.445595   66021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:01.463672   66021 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:01.463699   66021 start.go:494] detecting cgroup driver to use...
	I0314 00:58:01.463778   66021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:01.485254   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:01.503492   66021 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:01.503552   66021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:01.522423   66021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:01.537421   66021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:01.664303   66021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:01.819916   66021 docker.go:233] disabling docker service ...
	I0314 00:58:01.819980   66021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:01.838697   66021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:01.853242   66021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:02.003570   66021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:02.146836   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:02.162421   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:02.191202   66021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:58:02.191272   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.206856   66021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:02.206923   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.219794   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.233272   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.245213   66021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:02.259118   66021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:02.273991   66021 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:02.274056   66021 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:02.289319   66021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:02.300063   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:02.416447   66021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:58:02.566738   66021 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:02.566859   66021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:02.572193   66021 start.go:562] Will wait 60s for crictl version
	I0314 00:58:02.572234   66021 ssh_runner.go:195] Run: which crictl
	I0314 00:58:02.576144   66021 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:02.615025   66021 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:02.615124   66021 ssh_runner.go:195] Run: crio --version
	I0314 00:58:02.643201   66021 ssh_runner.go:195] Run: crio --version
	I0314 00:58:02.673207   66021 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
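	The CRI-O preparation logged above reduces to a few in-place config edits plus a service restart: point the runtime at the pause image, switch it to the cgroupfs driver, make sure bridge netfilter and IP forwarding are available, then restart crio. A minimal sketch of the same sequence, with paths and values taken from the log:

	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    sudo modprobe br_netfilter            # bridge-nf-call-iptables was absent on the first sysctl check
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio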
	I0314 00:58:01.192096   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .Start
	I0314 00:58:01.192279   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring networks are active...
	I0314 00:58:01.192923   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network default is active
	I0314 00:58:01.193276   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network mk-old-k8s-version-004791 is active
	I0314 00:58:01.193771   66232 main.go:141] libmachine: (old-k8s-version-004791) Getting domain xml...
	I0314 00:58:01.194453   66232 main.go:141] libmachine: (old-k8s-version-004791) Creating domain...
	I0314 00:58:02.495098   66232 main.go:141] libmachine: (old-k8s-version-004791) Waiting to get IP...
	I0314 00:58:02.496096   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:02.496509   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:02.496599   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:02.496504   66971 retry.go:31] will retry after 226.458873ms: waiting for machine to come up
	I0314 00:58:02.724812   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:02.725355   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:02.725383   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:02.725305   66971 retry.go:31] will retry after 274.59062ms: waiting for machine to come up
	I0314 00:58:03.001727   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.002335   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.002486   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.002429   66971 retry.go:31] will retry after 362.865307ms: waiting for machine to come up
	I0314 00:57:58.881850   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.480612113s)
	I0314 00:57:58.881884   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0314 00:57:58.881919   65864 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:58.881990   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:59.732349   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 00:57:59.732390   65864 cache_images.go:123] Successfully loaded all cached images
	I0314 00:57:59.732395   65864 cache_images.go:92] duration metric: took 16.182181374s to LoadCachedImages
	I0314 00:57:59.732406   65864 kubeadm.go:928] updating node { 192.168.39.115 8443 v1.29.0-rc.2 crio true true} ...
	I0314 00:57:59.732566   65864 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-585806 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:57:59.732632   65864 ssh_runner.go:195] Run: crio config
	I0314 00:57:59.780946   65864 cni.go:84] Creating CNI manager for ""
	I0314 00:57:59.780969   65864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:57:59.780980   65864 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:57:59.780999   65864 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-585806 NodeName:no-preload-585806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:57:59.781184   65864 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-585806"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:57:59.781255   65864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 00:57:59.791989   65864 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:57:59.792059   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:57:59.801720   65864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0314 00:57:59.819248   65864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 00:57:59.837405   65864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 00:57:59.855909   65864 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0314 00:57:59.861139   65864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:57:59.877573   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:00.004672   65864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:00.025676   65864 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806 for IP: 192.168.39.115
	I0314 00:58:00.025696   65864 certs.go:194] generating shared ca certs ...
	I0314 00:58:00.025711   65864 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:00.025861   65864 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:00.025912   65864 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:00.025925   65864 certs.go:256] generating profile certs ...
	I0314 00:58:00.026023   65864 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/client.key
	I0314 00:58:00.026093   65864 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.key.e22b08b3
	I0314 00:58:00.026150   65864 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.key
	I0314 00:58:00.026304   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:00.026342   65864 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:00.026355   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:00.026393   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:00.026424   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:00.026461   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:00.026510   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:00.027206   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:00.087876   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:00.130974   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:00.159419   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:00.202659   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 00:58:00.248014   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 00:58:00.273362   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:00.297326   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:58:00.321565   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:00.346012   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:00.370094   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:00.393592   65864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:00.411060   65864 ssh_runner.go:195] Run: openssl version
	I0314 00:58:00.417031   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:00.428430   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.433251   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.433303   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.439142   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:00.451840   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:00.466706   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.472024   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.472101   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.479004   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:00.490877   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:00.503120   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.507926   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.507973   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.513957   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
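Each CA certificate copied to /usr/share/ca-certificates above also gets a <subject-hash>.0 symlink under /etc/ssl/certs, which is how OpenSSL locates trust anchors by hash. A small Go sketch of that step, shelling out to openssl for the hash exactly as the log does (linkBySubjectHash is a hypothetical helper; the cert path is taken from the log and the operation needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the /etc/ssl/certs/<hash>.0 symlink OpenSSL uses
// to look up a CA cert, using `openssl x509 -hash -noout` to compute the hash.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // behave like ln -fs: replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/122682.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}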
	I0314 00:58:00.526055   65864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:00.531442   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:00.538049   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:00.544709   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:00.551218   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:00.557610   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:00.564187   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
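The openssl x509 -checkend 86400 calls above ask whether each cluster certificate expires within the next 24 hours. The same check can be expressed with Go's crypto/x509; a minimal sketch (expiresWithin is a hypothetical helper, and the path is one of the certs listed in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is what `openssl x509 -checkend 86400` tests for a 24h window.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}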
	I0314 00:58:00.571582   65864 kubeadm.go:391] StartCluster: {Name:no-preload-585806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:00.571725   65864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:00.571793   65864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:00.625273   65864 cri.go:89] found id: ""
	I0314 00:58:00.625330   65864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:00.636554   65864 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:00.636582   65864 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:00.636588   65864 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:00.636630   65864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:00.648360   65864 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:00.649289   65864 kubeconfig.go:125] found "no-preload-585806" server: "https://192.168.39.115:8443"
	I0314 00:58:00.652107   65864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:00.664337   65864 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.115
	I0314 00:58:00.664378   65864 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:00.664390   65864 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:00.664436   65864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:00.702043   65864 cri.go:89] found id: ""
	I0314 00:58:00.702119   65864 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:00.721052   65864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:00.732931   65864 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:00.732961   65864 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:00.733015   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:00.743282   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:00.743363   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:00.753893   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:00.764545   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:00.764603   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:00.779121   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:00.795628   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:00.795690   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:00.807835   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:00.820920   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:00.821000   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
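For every kubeconfig under /etc/kubernetes the bootstrapper greps for the expected https://control-plane.minikube.internal:8443 endpoint and deletes the file when the endpoint is absent; here none of the files exist yet, so each grep exits 2 and each rm is a no-op. A compact Go sketch of that keep-or-remove decision (an illustration only, with the endpoint and file list copied from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443" // from the log
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it so kubeadm regenerates it.
			os.Remove(f)
			fmt.Println("removed stale", f)
			continue
		}
		fmt.Println("keeping", f)
	}
}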
	I0314 00:58:00.834341   65864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:00.844677   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:00.971502   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:01.810329   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:02.063422   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:02.144025   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
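Instead of a full kubeadm init, the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane manifests, then a local etcd member. A hedged Go sketch of driving those phases with os/exec (binary path, config path and phase order are taken from the log; this is not minikube's bootstrapper code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm" // path from the log
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Same order as the log: certs, kubeconfig, kubelet-start, control-plane, etcd.
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
			return
		}
	}
}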
	I0314 00:58:02.284020   65864 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:02.284117   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:02.784938   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:03.285046   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:03.349582   65864 api_server.go:72] duration metric: took 1.065560764s to wait for apiserver process to appear ...
	I0314 00:58:03.349613   65864 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:03.349634   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:03.350222   65864 api_server.go:269] stopped: https://192.168.39.115:8443/healthz: Get "https://192.168.39.115:8443/healthz": dial tcp 192.168.39.115:8443: connect: connection refused
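From this point the bootstrapper polls https://192.168.39.115:8443/healthz until the connection stops being refused and the endpoint finally returns 200; the 403 from system:anonymous and the 500 "healthz check failed" responses below are transient states while post-start hooks such as rbac/bootstrap-roles complete. A minimal polling sketch in Go, using an anonymous client that skips certificate verification (endpoint from the log; timings are illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.115:8443/healthz" // from the log
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The probe is anonymous and does not trust the cluster CA, so skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connection refused" while the apiserver static pod is still starting
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy:", string(body))
			return
		}
		fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}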
	I0314 00:58:02.674905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:02.677914   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:02.678319   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:02.678358   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:02.678506   66021 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:02.682714   66021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:02.696263   66021 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-652215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:02.696407   66021 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:58:02.696474   66021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:02.736997   66021 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 00:58:02.737060   66021 ssh_runner.go:195] Run: which lz4
	I0314 00:58:02.741014   66021 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 00:58:02.745225   66021 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:02.745255   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 00:58:04.577503   66021 crio.go:444] duration metric: took 1.836515386s to copy over tarball
	I0314 00:58:04.577580   66021 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
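With no preloaded images on the VM, the cached tarball (roughly 458 MB for v1.28.4/cri-o) is copied over and unpacked into /var with lz4 decompression, preserving the security.capability xattr. A sketch of that extract step from Go (flags and paths copied from the log; assumes the lz4 binary located by the earlier `which lz4`):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same invocation as the log: decompress with lz4 (-I), keep the
	// security.capability xattr, and unpack under /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	fmt.Println("preloaded images extracted into /var")
}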
	I0314 00:58:03.367211   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.367946   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.367985   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.367818   66971 retry.go:31] will retry after 545.955079ms: waiting for machine to come up
	I0314 00:58:03.915415   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.915920   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.915946   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.915836   66971 retry.go:31] will retry after 509.217519ms: waiting for machine to come up
	I0314 00:58:04.426378   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:04.426707   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:04.426730   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:04.426682   66971 retry.go:31] will retry after 834.85927ms: waiting for machine to come up
	I0314 00:58:05.263751   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:05.264214   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:05.264244   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:05.264155   66971 retry.go:31] will retry after 986.483361ms: waiting for machine to come up
	I0314 00:58:06.251927   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:06.252550   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:06.252573   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:06.252475   66971 retry.go:31] will retry after 1.151541473s: waiting for machine to come up
	I0314 00:58:07.405797   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:07.406395   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:07.406425   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:07.406349   66971 retry.go:31] will retry after 1.406754601s: waiting for machine to come up
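Meanwhile the old-k8s-version VM is still waiting for a DHCP lease: retry.go keeps re-querying libvirt with a growing, jittered delay (546ms, 509ms, 835ms, 986ms, 1.15s, 1.41s, ...) until the domain reports an IP. A small Go sketch of that poll-with-backoff shape (getIP is a stand-in, not libmachine's API, and the returned address is a made-up example):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// getIP is a placeholder for "ask libvirt for the domain's DHCP lease".
func getIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.50.21", nil // example value, not from the log
}

func main() {
	base := 500 * time.Millisecond
	for attempt := 0; attempt < 20; attempt++ {
		ip, err := getIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the wait a little each round and add jitter, like the
		// "will retry after ..." lines in the log.
		wait := base + time.Duration(attempt)*200*time.Millisecond +
			time.Duration(rand.Intn(300))*time.Millisecond
		fmt.Printf("retry %d: %v, waiting %v\n", attempt+1, err, wait)
		time.Sleep(wait)
	}
	fmt.Println("gave up waiting for machine to come up")
}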
	I0314 00:58:03.850705   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.738726   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:06.738753   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:06.738788   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.754844   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:06.754883   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:06.850175   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.859445   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:06.859483   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.350592   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:07.367299   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:07.367337   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.850476   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:08.566122   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:08.566165   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:08.566182   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:08.571741   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:08.571777   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.355046   66021 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.77743394s)
	I0314 00:58:07.355081   66021 crio.go:451] duration metric: took 2.77754644s to extract the tarball
	I0314 00:58:07.355093   66021 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:07.401032   66021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:07.451493   66021 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:58:07.451515   66021 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:58:07.451523   66021 kubeadm.go:928] updating node { 192.168.61.7 8444 v1.28.4 crio true true} ...
	I0314 00:58:07.451679   66021 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-652215 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
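The kubelet unit shown above is rendered from the node's Kubernetes version, hostname and IP. A minimal sketch of generating the same drop-in with text/template (the nodeConfig struct and field names are mine, not minikube's; the values are the ones from the log):

package main

import (
	"os"
	"text/template"
)

// nodeConfig holds just the fields that vary in the rendered unit above.
type nodeConfig struct {
	Version, Name, IP string
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, nodeConfig{Version: "v1.28.4", Name: "default-k8s-diff-port-652215", IP: "192.168.61.7"}); err != nil {
		panic(err)
	}
}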
	I0314 00:58:07.451756   66021 ssh_runner.go:195] Run: crio config
	I0314 00:58:07.500159   66021 cni.go:84] Creating CNI manager for ""
	I0314 00:58:07.500182   66021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:07.500192   66021 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:07.500211   66021 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.7 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-652215 NodeName:default-k8s-diff-port-652215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:58:07.500349   66021 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-652215"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
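The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A small sketch that splits such a stream and reports each document's kind, assuming gopkg.in/yaml.v3 is available (illustration only, not how kubeadm itself parses its config):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			return
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}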
	I0314 00:58:07.500398   66021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:58:07.515207   66021 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:07.515281   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:07.530918   66021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0314 00:58:07.558457   66021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:07.582126   66021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 00:58:07.678701   66021 ssh_runner.go:195] Run: grep 192.168.61.7	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:07.684200   66021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:07.701599   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:07.825784   66021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:07.848241   66021 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215 for IP: 192.168.61.7
	I0314 00:58:07.848265   66021 certs.go:194] generating shared ca certs ...
	I0314 00:58:07.848286   66021 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:07.848457   66021 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:07.848515   66021 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:07.848529   66021 certs.go:256] generating profile certs ...
	I0314 00:58:07.848644   66021 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/client.key
	I0314 00:58:07.935830   66021 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.key.b1ed833a
	I0314 00:58:07.935933   66021 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.key
	I0314 00:58:07.936092   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:07.936147   66021 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:07.936161   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:07.936191   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:07.936222   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:07.936255   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:07.936326   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:07.937040   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:07.981116   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:08.010341   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:08.036689   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:08.064909   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 00:58:08.092883   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 00:58:08.119465   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:08.146029   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 00:58:08.171735   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:08.198370   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:08.225423   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:08.253303   66021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:08.272262   66021 ssh_runner.go:195] Run: openssl version
	I0314 00:58:08.278047   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:08.289661   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.294307   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.294365   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.300267   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:08.311382   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:08.322886   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.328522   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.328588   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.335598   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:08.347048   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:08.358811   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.365065   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.365113   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.372929   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:08.384586   66021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:08.389382   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:08.395577   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:08.401901   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:08.409134   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:08.415666   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:08.422160   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 00:58:08.428553   66021 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-652215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:08.428681   66021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:08.428757   66021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:08.471162   66021 cri.go:89] found id: ""
	I0314 00:58:08.471246   66021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:08.482236   66021 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:08.482258   66021 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:08.482266   66021 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:08.482318   66021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:08.492599   66021 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:08.493612   66021 kubeconfig.go:125] found "default-k8s-diff-port-652215" server: "https://192.168.61.7:8444"
	I0314 00:58:08.495896   66021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:08.509437   66021 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.7
	I0314 00:58:08.509469   66021 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:08.509498   66021 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:08.509552   66021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:08.549257   66021 cri.go:89] found id: ""
	I0314 00:58:08.549319   66021 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:08.570357   66021 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:08.580942   66021 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:08.580961   66021 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:08.581002   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 00:58:08.590668   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:08.590750   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:08.600638   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 00:58:08.610219   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:08.610289   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:08.620324   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 00:58:08.629979   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:08.630037   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:08.640264   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 00:58:08.650070   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:08.650126   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:08.661293   66021 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:08.671779   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:08.808194   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:09.724860   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:09.979007   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:10.059809   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
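[Editor's note] The 66021 lines above are the control-plane restart path: kubeconfig files under /etc/kubernetes that no longer reference the expected control-plane endpoint are removed, the regenerated kubeadm.yaml is copied into place, and the individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) are re-run. The following is a minimal Go sketch of that flow only, not minikube's actual implementation; `runOnNode` is a hypothetical stand-in for the ssh_runner calls seen in the log.

```go
// restart_sketch.go: illustrative only; the command strings mirror the log above,
// but runOnNode is a hypothetical stand-in for minikube's SSH runner.
package sketch

import (
	"fmt"
	"os/exec"
)

// runOnNode is a placeholder: in minikube these commands run over SSH inside the VM.
func runOnNode(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", cmd, err, out)
	}
	return nil
}

func restartControlPlane(version, endpoint string) error {
	confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, c := range confs {
		// Drop any kubeconfig that does not point at the expected endpoint.
		if runOnNode(fmt.Sprintf("sudo grep %s /etc/kubernetes/%s", endpoint, c)) != nil {
			if err := runOnNode("sudo rm -f /etc/kubernetes/" + c); err != nil {
				return err
			}
		}
	}
	if err := runOnNode("sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml"); err != nil {
		return err
	}
	// Re-run the individual init phases in the same order as the log.
	for _, phase := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, version, phase)
		if err := runOnNode(cmd); err != nil {
			return err
		}
	}
	return nil
}
```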
	I0314 00:58:08.850333   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.132696   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.132738   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:09.349928   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.354965   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.355007   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:09.850589   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.855760   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.855791   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:10.350395   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:10.356047   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0314 00:58:10.363343   65864 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 00:58:10.363367   65864 api_server.go:131] duration metric: took 7.013748269s to wait for apiserver health ...
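[Editor's note] The 65864 process above reaches a healthy apiserver by polling /healthz and treating HTTP 500 (post-start hooks such as rbac/bootstrap-roles still settling) as "keep waiting" until a 200 comes back, about 7s in this run. A minimal sketch of such a poll loop follows; it is illustrative rather than minikube's api_server.go, and the TLS configuration (minikube authenticates with its client cert and CA) is left to the caller.

```go
// healthz_poll.go: minimal sketch of the healthz wait seen above.
// The *tls.Config (client cert/CA) is assumed to be prepared by the caller.
package sketch

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration, tlsConf *tls.Config) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: tlsConf},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url) // e.g. https://192.168.39.115:8443/healthz
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				return nil // corresponds to "healthz returned 200: ok" in the log
			}
			// A 500 here means some post-start hooks are still failing; retry.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}
```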
	I0314 00:58:10.363376   65864 cni.go:84] Creating CNI manager for ""
	I0314 00:58:10.363382   65864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:10.365214   65864 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:10.366578   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:58:10.388294   65864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:58:10.416671   65864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:58:10.432468   65864 system_pods.go:59] 8 kube-system pods found
	I0314 00:58:10.432506   65864 system_pods.go:61] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:58:10.432513   65864 system_pods.go:61] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:58:10.432522   65864 system_pods.go:61] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:58:10.432528   65864 system_pods.go:61] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:58:10.432532   65864 system_pods.go:61] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 00:58:10.432536   65864 system_pods.go:61] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:58:10.432541   65864 system_pods.go:61] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:58:10.432545   65864 system_pods.go:61] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 00:58:10.432552   65864 system_pods.go:74] duration metric: took 15.857608ms to wait for pod list to return data ...
	I0314 00:58:10.432558   65864 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:58:10.435982   65864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:58:10.436009   65864 node_conditions.go:123] node cpu capacity is 2
	I0314 00:58:10.436022   65864 node_conditions.go:105] duration metric: took 3.459248ms to run NodePressure ...
	I0314 00:58:10.436048   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:10.711752   65864 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:58:10.718781   65864 kubeadm.go:733] kubelet initialised
	I0314 00:58:10.718802   65864 kubeadm.go:734] duration metric: took 7.016806ms waiting for restarted kubelet to initialise ...
	I0314 00:58:10.718811   65864 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:10.725838   65864 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.732973   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "coredns-76f75df574-lptfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.733003   65864 pod_ready.go:81] duration metric: took 7.130935ms for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.733015   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "coredns-76f75df574-lptfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.733024   65864 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.739301   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "etcd-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.739330   65864 pod_ready.go:81] duration metric: took 6.292816ms for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.739344   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "etcd-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.739353   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.745734   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-apiserver-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.745764   65864 pod_ready.go:81] duration metric: took 6.401917ms for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.745775   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-apiserver-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.745793   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.823797   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.823901   65864 pod_ready.go:81] duration metric: took 78.092373ms for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.823920   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.823930   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:11.221218   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-proxy-wpdb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.221255   65864 pod_ready.go:81] duration metric: took 397.31401ms for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:11.221268   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-proxy-wpdb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.221276   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:11.622051   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-scheduler-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.622089   65864 pod_ready.go:81] duration metric: took 400.804067ms for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:11.622101   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-scheduler-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.622109   65864 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:12.021835   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:12.021869   65864 pod_ready.go:81] duration metric: took 399.741056ms for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:12.021882   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:12.021892   65864 pod_ready.go:38] duration metric: took 1.303069721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
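[Editor's note] Once the control plane answers, the log waits up to 4m0s per system-critical pod but records "skipping!" whenever the hosting node is still NotReady, which is why every pod check above short-circuits. Below is a rough client-go sketch of that per-pod check, assuming a prepared clientset; it is illustrative and not minikube's pod_ready.go.

```go
// pod_ready_sketch.go: illustrative Ready check for a pod that is skipped
// when its node is not Ready, mirroring the log behaviour above.
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			// Node hosting the pod is not Ready yet: skip it, as the log does.
			return false, fmt.Errorf("node %q not Ready", node.Name)
		}
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```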
	I0314 00:58:12.021915   65864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:58:12.039361   65864 ops.go:34] apiserver oom_adj: -16
	I0314 00:58:12.039397   65864 kubeadm.go:591] duration metric: took 11.402802169s to restartPrimaryControlPlane
	I0314 00:58:12.039408   65864 kubeadm.go:393] duration metric: took 11.467836192s to StartCluster
	I0314 00:58:12.039426   65864 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:12.039516   65864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:12.041925   65864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:12.042230   65864 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:58:12.044069   65864 out.go:177] * Verifying Kubernetes components...
	I0314 00:58:12.042310   65864 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:58:12.042489   65864 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:58:12.045460   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:12.045470   65864 addons.go:69] Setting metrics-server=true in profile "no-preload-585806"
	I0314 00:58:12.045505   65864 addons.go:234] Setting addon metrics-server=true in "no-preload-585806"
	W0314 00:58:12.045517   65864 addons.go:243] addon metrics-server should already be in state true
	I0314 00:58:12.045461   65864 addons.go:69] Setting storage-provisioner=true in profile "no-preload-585806"
	I0314 00:58:12.045548   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.045557   65864 addons.go:234] Setting addon storage-provisioner=true in "no-preload-585806"
	W0314 00:58:12.045568   65864 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:58:12.045595   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.045462   65864 addons.go:69] Setting default-storageclass=true in profile "no-preload-585806"
	I0314 00:58:12.045653   65864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-585806"
	I0314 00:58:12.045960   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.045989   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.045989   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.046009   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.046026   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.046052   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.065596   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38705
	I0314 00:58:12.065599   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
	I0314 00:58:12.066126   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.066229   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.066725   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.066747   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.066921   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.066937   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.067164   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.067341   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.067347   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.067943   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.067969   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.071254   65864 addons.go:234] Setting addon default-storageclass=true in "no-preload-585806"
	W0314 00:58:12.071275   65864 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:58:12.071302   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.071676   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.071703   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.089025   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0314 00:58:12.089439   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.089971   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.089987   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.091596   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38431
	I0314 00:58:12.091896   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.092061   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.092552   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.092573   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.092792   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39831
	I0314 00:58:12.092997   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.093009   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.093356   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.093879   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.093914   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.094125   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.094811   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.094830   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.095229   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.095432   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.097415   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.099392   65864 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:12.100577   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:58:12.100594   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:58:12.100618   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.103892   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.104467   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.104489   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.104667   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.106971   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.107150   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.107313   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.111900   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0314 00:58:12.112581   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.113114   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.113130   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.113580   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.113776   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.115360   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.115676   65864 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:12.115691   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:58:12.115707   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.117453   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0314 00:58:12.118029   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.118488   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.118776   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.118793   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.118960   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.118982   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.119173   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.119729   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.119945   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.121529   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.123821   65864 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:08.814918   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:08.815383   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:08.815414   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:08.815336   66971 retry.go:31] will retry after 1.619075545s: waiting for machine to come up
	I0314 00:58:10.435841   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:10.436245   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:10.436272   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:10.436204   66971 retry.go:31] will retry after 2.396707044s: waiting for machine to come up
	I0314 00:58:12.834287   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:12.834691   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:12.834720   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:12.834649   66971 retry.go:31] will retry after 2.803309164s: waiting for machine to come up
	I0314 00:58:12.122163   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.125529   65864 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:12.125549   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:58:12.125566   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.125622   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.128908   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.128920   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.129475   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.129499   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.129653   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.129851   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.130023   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.130149   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.258865   65864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:12.279758   65864 node_ready.go:35] waiting up to 6m0s for node "no-preload-585806" to be "Ready" ...
	I0314 00:58:12.393255   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:58:12.393276   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:58:12.396083   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:12.401894   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:12.442825   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:58:12.442852   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:58:12.516967   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:12.516997   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:58:12.549493   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:13.476386   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.080265638s)
	I0314 00:58:13.476460   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.476489   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.476397   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.074462931s)
	I0314 00:58:13.476626   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.476639   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477023   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477039   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477036   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477047   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.477055   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477066   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477071   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477087   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477094   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.477100   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477458   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477491   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477498   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477550   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477566   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.489141   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.489174   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.489460   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.489522   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.489541   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.586956   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037420385s)
	I0314 00:58:13.587013   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.587029   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.587367   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.587386   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.587396   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.587405   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.587406   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.587781   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.587856   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.587878   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.587910   65864 addons.go:470] Verifying addon metrics-server=true in "no-preload-585806"
	I0314 00:58:13.590325   65864 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:58:13.591691   65864 addons.go:505] duration metric: took 1.549382287s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
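[Editor's note] Addon enablement above (storage-provisioner, default-storageclass, metrics-server) follows one pattern: the manifests are copied to /etc/kubernetes/addons/ on the guest and applied with the kubectl binary matching the cluster version. The apply step, sketched below for illustration only, again runs over SSH in minikube rather than locally as written here.

```go
// addons_apply_sketch.go: illustrative only; mirrors the kubectl apply command
// from the log. In minikube this command runs over SSH inside the guest.
package sketch

import (
	"fmt"
	"os/exec"
	"strings"
)

func applyAddons(k8sVersion string, manifests []string) error {
	// e.g. manifests = []string{"/etc/kubernetes/addons/metrics-apiservice.yaml", ...}
	parts := make([]string, 0, len(manifests))
	for _, m := range manifests {
		parts = append(parts, "-f "+m)
	}
	cmd := fmt.Sprintf(
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/%s/kubectl apply %s",
		k8sVersion, strings.Join(parts, " "))
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply addons: %v\n%s", err, out)
	}
	return nil
}
```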
	I0314 00:58:10.176806   66021 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:10.176884   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:10.677299   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:11.177069   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:11.214552   66021 api_server.go:72] duration metric: took 1.037744324s to wait for apiserver process to appear ...
	I0314 00:58:11.214587   66021 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:11.214610   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:11.215138   66021 api_server.go:269] stopped: https://192.168.61.7:8444/healthz: Get "https://192.168.61.7:8444/healthz": dial tcp 192.168.61.7:8444: connect: connection refused
	I0314 00:58:11.714667   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.616838   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:14.616877   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:14.616893   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.658759   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:14.658796   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:14.715024   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.733591   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:14.733634   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:15.214665   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:15.234066   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:15.234110   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:15.715301   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:15.721645   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:15.721675   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:16.215286   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:16.222564   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0314 00:58:16.232709   66021 api_server.go:141] control plane version: v1.28.4
	I0314 00:58:16.232737   66021 api_server.go:131] duration metric: took 5.018142072s to wait for apiserver health ...
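The repeated 500s above are the apiserver's verbose healthz report: each [-] entry is a poststarthook that has not finished, and the failing set shrinks to rbac/bootstrap-roles just before the endpoint starts returning 200. A minimal way to reproduce the same poll by hand (this is an illustration, not what minikube runs; it assumes curl and network reach to the VM, -k skips verification of the apiserver's self-signed certificate, and appending ?verbose prints the per-check detail seen above):

    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.61.7:8444/healthz)" = "200" ]; do
      sleep 0.5
    done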
	I0314 00:58:16.232747   66021 cni.go:84] Creating CNI manager for ""
	I0314 00:58:16.232756   66021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:16.234470   66021 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:16.235612   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:58:16.248214   66021 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
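The file copied here is the bridge CNI configuration minikube generates for the "kvm2" driver + "crio" runtime combination noted above. The exact 457-byte payload is not echoed in this log; purely as an illustration of the file's shape (field values such as the pod subnet are assumptions, not taken from this run), a minimal bridge + host-local conflist looks roughly like:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF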
	I0314 00:58:16.277370   66021 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:58:16.288623   66021 system_pods.go:59] 8 kube-system pods found
	I0314 00:58:16.288650   66021 system_pods.go:61] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:58:16.288657   66021 system_pods.go:61] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:58:16.288663   66021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:58:16.288671   66021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:58:16.288677   66021 system_pods.go:61] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 00:58:16.288682   66021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:58:16.288687   66021 system_pods.go:61] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:58:16.288690   66021 system_pods.go:61] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 00:58:16.288696   66021 system_pods.go:74] duration metric: took 11.305344ms to wait for pod list to return data ...
	I0314 00:58:16.288702   66021 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:58:16.292286   66021 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:58:16.292308   66021 node_conditions.go:123] node cpu capacity is 2
	I0314 00:58:16.292320   66021 node_conditions.go:105] duration metric: took 3.61409ms to run NodePressure ...
	I0314 00:58:16.292335   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:16.512870   66021 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:58:16.517507   66021 kubeadm.go:733] kubelet initialised
	I0314 00:58:16.517529   66021 kubeadm.go:734] duration metric: took 4.638745ms waiting for restarted kubelet to initialise ...
	I0314 00:58:16.517536   66021 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:16.523002   66021 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.527973   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.527992   66021 pod_ready.go:81] duration metric: took 4.971635ms for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.527999   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.528005   66021 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.532109   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.532130   66021 pod_ready.go:81] duration metric: took 4.119441ms for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.532138   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.532144   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.536921   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.536947   66021 pod_ready.go:81] duration metric: took 4.797369ms for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.536957   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.536963   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.681145   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.681174   66021 pod_ready.go:81] duration metric: took 144.203955ms for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.681183   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.681189   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.081346   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-proxy-s7dwp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.081372   66021 pod_ready.go:81] duration metric: took 400.176843ms for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.081380   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-proxy-s7dwp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.081386   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.481726   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.481760   66021 pod_ready.go:81] duration metric: took 400.364366ms for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.481775   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.481784   66021 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.881076   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.881101   66021 pod_ready.go:81] duration metric: took 399.308565ms for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.881112   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.881118   66021 pod_ready.go:38] duration metric: took 1.363574607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
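Each pod_ready wait above bails out early because the node itself still reports Ready=False. The same per-pod readiness gates can be checked from outside the test with kubectl (a sketch only; the context name is assumed to match the profile, and the labels are the ones listed in the wait above):

    kubectl --context default-k8s-diff-port-652215 -n kube-system get pods -o wide
    kubectl --context default-k8s-diff-port-652215 -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m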
	I0314 00:58:17.881137   66021 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:58:17.893680   66021 ops.go:34] apiserver oom_adj: -16
	I0314 00:58:17.893703   66021 kubeadm.go:591] duration metric: took 9.411432465s to restartPrimaryControlPlane
	I0314 00:58:17.893711   66021 kubeadm.go:393] duration metric: took 9.465165177s to StartCluster
	I0314 00:58:17.893725   66021 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:17.893783   66021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:17.895292   66021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:17.895523   66021 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:58:17.897956   66021 out.go:177] * Verifying Kubernetes components...
	I0314 00:58:17.895646   66021 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:58:17.895730   66021 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:17.898002   66021 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.898023   66021 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.899554   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:17.897994   66021 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.899681   66021 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.899693   66021 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:58:17.898063   66021 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-652215"
	I0314 00:58:17.899720   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.898068   66021 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.899784   66021 addons.go:243] addon metrics-server should already be in state true
	I0314 00:58:17.899811   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.900048   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900077   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.900111   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900141   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.900171   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900188   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.915185   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0314 00:58:17.915208   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
	I0314 00:58:17.915576   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.915710   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.916152   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.916171   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.916305   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.916330   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.916511   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.916671   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.916831   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.917105   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.917132   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.918252   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41673
	I0314 00:58:17.918697   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.919230   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.919250   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.919523   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.920110   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.920171   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.920214   66021 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.920231   66021 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:58:17.920262   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.920646   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.920681   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.932173   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0314 00:58:17.932593   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.933094   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.933117   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.933473   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.933707   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.934448   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0314 00:58:17.934516   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I0314 00:58:17.934891   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.935069   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.935423   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.935443   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.935577   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.935595   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.935663   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.937699   66021 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:17.936039   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.936042   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.938931   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:58:17.938948   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:58:17.938977   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.939211   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.939596   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.939625   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.941065   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.942845   66021 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:15.639214   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:15.639656   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:15.639696   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:15.639617   66971 retry.go:31] will retry after 3.192360952s: waiting for machine to come up
	I0314 00:58:14.292798   65864 node_ready.go:53] node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:16.784397   65864 node_ready.go:53] node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:17.284580   65864 node_ready.go:49] node "no-preload-585806" has status "Ready":"True"
	I0314 00:58:17.284611   65864 node_ready.go:38] duration metric: took 5.004823398s for node "no-preload-585806" to be "Ready" ...
	I0314 00:58:17.284623   65864 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:17.290888   65864 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.297127   65864 pod_ready.go:92] pod "coredns-76f75df574-lptfk" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:17.297152   65864 pod_ready.go:81] duration metric: took 6.235547ms for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.297163   65864 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.944316   66021 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:17.942113   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.942648   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.944350   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:58:17.944376   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.944371   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.944451   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.944500   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.944675   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.944826   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:17.947097   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.947474   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.947507   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.947640   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.947816   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.947960   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.948095   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:17.957502   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35563
	I0314 00:58:17.957899   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.958344   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.958364   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.958645   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.958816   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.960222   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.960577   66021 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:17.960591   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:58:17.960610   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.963238   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.963676   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.963698   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.963850   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.963995   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.964114   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.964213   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:18.098402   66021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:18.116854   66021 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-652215" to be "Ready" ...
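While the addons are being applied, this wait keeps polling the node's Ready condition. A hedged one-liner for the same check (context name assumed from the profile; filter syntax per kubectl's jsonpath support):

    kubectl --context default-k8s-diff-port-652215 get node default-k8s-diff-port-652215 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'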
	I0314 00:58:18.232236   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:58:18.232256   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:58:18.238208   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:18.261851   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:18.263856   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:58:18.263877   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:58:18.325498   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:18.325520   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:58:18.391369   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:19.482825   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24458075s)
	I0314 00:58:19.482879   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.482891   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.482959   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.221078542s)
	I0314 00:58:19.483000   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483017   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483196   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483216   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.483212   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.483224   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483233   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483242   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483258   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.483273   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483280   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483288   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.483551   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483590   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.484020   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.484105   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.484148   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.491315   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.491332   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.491552   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.491583   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583024   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.191597961s)
	I0314 00:58:19.583083   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.583096   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.583362   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.583400   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583421   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.583435   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.583447   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.583724   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.583762   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.583815   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583837   66021 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-652215"
	I0314 00:58:19.585771   66021 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:58:19.587252   66021 addons.go:505] duration metric: took 1.691609624s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
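The three addons were applied with the cluster-local kubectl invocations shown above; the metrics-server pod is still Pending at this point. A hedged follow-up check once the node goes Ready (deployment name inferred from the metrics-server pod name in this log; kubectl top only works once the metrics API is actually serving):

    kubectl --context default-k8s-diff-port-652215 -n kube-system rollout status deployment/metrics-server --timeout=4m
    kubectl --context default-k8s-diff-port-652215 top nodes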
	I0314 00:58:20.120924   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:18.833069   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:18.833438   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:18.833470   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:18.833388   66971 retry.go:31] will retry after 5.67556795s: waiting for machine to come up
	I0314 00:58:19.304162   65864 pod_ready.go:102] pod "etcd-no-preload-585806" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:20.804158   65864 pod_ready.go:92] pod "etcd-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.804180   65864 pod_ready.go:81] duration metric: took 3.507009199s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.804191   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.810040   65864 pod_ready.go:92] pod "kube-apiserver-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.810065   65864 pod_ready.go:81] duration metric: took 5.865494ms for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.810080   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.815049   65864 pod_ready.go:92] pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.815077   65864 pod_ready.go:81] duration metric: took 4.984409ms for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.815086   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.821316   65864 pod_ready.go:92] pod "kube-proxy-wpdb9" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.821342   65864 pod_ready.go:81] duration metric: took 6.249664ms for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.821354   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:21.828500   65864 pod_ready.go:92] pod "kube-scheduler-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:21.828524   65864 pod_ready.go:81] duration metric: took 1.00716238s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:21.828533   65864 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:22.621791   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:25.121386   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:26.059625   65557 start.go:364] duration metric: took 59.181975988s to acquireMachinesLock for "embed-certs-164135"
	I0314 00:58:26.059670   65557 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:58:26.059681   65557 fix.go:54] fixHost starting: 
	I0314 00:58:26.060084   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:26.060117   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:26.079338   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0314 00:58:26.079705   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:26.080159   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:58:26.080181   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:26.080547   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:26.080747   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:26.080907   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:58:26.082633   65557 fix.go:112] recreateIfNeeded on embed-certs-164135: state=Stopped err=<nil>
	I0314 00:58:26.082671   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	W0314 00:58:26.082861   65557 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:58:26.085610   65557 out.go:177] * Restarting existing kvm2 VM for "embed-certs-164135" ...
	I0314 00:58:24.511666   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.512275   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has current primary IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.512307   66232 main.go:141] libmachine: (old-k8s-version-004791) Found IP for machine: 192.168.72.11
	I0314 00:58:24.512321   66232 main.go:141] libmachine: (old-k8s-version-004791) Reserving static IP address...
	I0314 00:58:24.512704   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.512726   66232 main.go:141] libmachine: (old-k8s-version-004791) Reserved static IP address: 192.168.72.11
	I0314 00:58:24.512740   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | skip adding static IP to network mk-old-k8s-version-004791 - found existing host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"}
	I0314 00:58:24.512751   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Getting to WaitForSSH function...
	I0314 00:58:24.512763   66232 main.go:141] libmachine: (old-k8s-version-004791) Waiting for SSH to be available...
	I0314 00:58:24.515177   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.515623   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.515657   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.515863   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH client type: external
	I0314 00:58:24.515892   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa (-rw-------)
	I0314 00:58:24.515924   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:58:24.515940   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | About to run SSH command:
	I0314 00:58:24.515956   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | exit 0
	I0314 00:58:24.642866   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | SSH cmd err, output: <nil>: 
	I0314 00:58:24.643186   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetConfigRaw
	I0314 00:58:24.643853   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:24.645950   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.646309   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.646338   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.646566   66232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:58:24.646801   66232 machine.go:94] provisionDockerMachine start ...
	I0314 00:58:24.646823   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:24.647032   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.649249   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.649588   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.649618   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.649752   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.649926   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.650131   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.650315   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.650487   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.650664   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.650675   66232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:58:24.763290   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:58:24.763320   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:24.763558   66232 buildroot.go:166] provisioning hostname "old-k8s-version-004791"
	I0314 00:58:24.763592   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:24.763745   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.766422   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.766719   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.766745   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.766894   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.767075   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.767238   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.767388   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.767564   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.767776   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.767795   66232 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-004791 && echo "old-k8s-version-004791" | sudo tee /etc/hostname
	I0314 00:58:24.893811   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-004791
	
	I0314 00:58:24.893844   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.896527   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.896909   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.896937   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.897096   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.897277   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.897455   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.897623   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.897814   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.897979   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.897995   66232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-004791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-004791/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-004791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:25.021661   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:25.021695   66232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:25.021722   66232 buildroot.go:174] setting up certificates
	I0314 00:58:25.021735   66232 provision.go:84] configureAuth start
	I0314 00:58:25.021766   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:25.022032   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:25.024687   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.024989   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.025030   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.025155   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.027609   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.027948   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.027977   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.028079   66232 provision.go:143] copyHostCerts
	I0314 00:58:25.028145   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:25.028155   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:25.028208   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:25.028333   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:25.028342   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:25.028361   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:25.028421   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:25.028428   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:25.028445   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:25.028532   66232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-004791 san=[127.0.0.1 192.168.72.11 localhost minikube old-k8s-version-004791]
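	The provision step above issues a server certificate signed by the minikube CA, with the listed names and IPs embedded as SANs. Below is a minimal, hypothetical Go sketch of that kind of SAN-bearing certificate issuance with crypto/x509; the package and function names are illustrative and this is not minikube's actual provision code.

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// signServerCert is a hypothetical helper: issue a server certificate signed
	// by an existing CA, splitting the requested SANs into IP and DNS entries the
	// way the san=[127.0.0.1 192.168.72.11 localhost minikube ...] list mixes both.
	func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) (certPEM, keyPEM []byte, err error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, s := range sans {
			// IP SANs and DNS SANs go into different certificate fields.
			if ip := net.ParseIP(s); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, s)
			}
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		return certPEM, keyPEM, nil
	}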
	I0314 00:58:25.338174   66232 provision.go:177] copyRemoteCerts
	I0314 00:58:25.338239   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:25.338272   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.340651   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.341044   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.341084   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.341243   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.341445   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.341613   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.341779   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:25.437346   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 00:58:25.464534   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:58:25.491186   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:25.520290   66232 provision.go:87] duration metric: took 498.536449ms to configureAuth
	I0314 00:58:25.520330   66232 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:25.520551   66232 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:58:25.520631   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.523579   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.523954   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.523982   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.524176   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.524418   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.524604   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.524841   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.525032   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:25.525233   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:25.525267   66232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:25.813702   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:25.813724   66232 machine.go:97] duration metric: took 1.166910056s to provisionDockerMachine
	I0314 00:58:25.813735   66232 start.go:293] postStartSetup for "old-k8s-version-004791" (driver="kvm2")
	I0314 00:58:25.813745   66232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:25.813767   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:25.814102   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:25.814132   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.816973   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.817316   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.817351   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.817496   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.817695   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.817895   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.818065   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:25.905564   66232 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:25.910139   66232 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:25.910168   66232 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:25.910237   66232 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:25.910315   66232 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:25.910406   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:25.919998   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:25.946236   66232 start.go:296] duration metric: took 132.483335ms for postStartSetup
	I0314 00:58:25.946270   66232 fix.go:56] duration metric: took 24.778527973s for fixHost
	I0314 00:58:25.946291   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.948993   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.949352   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.949382   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.949491   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.949674   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.949839   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.950008   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.950178   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:25.950327   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:25.950337   66232 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 00:58:26.059477   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377906.045276928
	
	I0314 00:58:26.059498   66232 fix.go:216] guest clock: 1710377906.045276928
	I0314 00:58:26.059504   66232 fix.go:229] Guest: 2024-03-14 00:58:26.045276928 +0000 UTC Remote: 2024-03-14 00:58:25.946273472 +0000 UTC m=+262.884746009 (delta=99.003456ms)
	I0314 00:58:26.059522   66232 fix.go:200] guest clock delta is within tolerance: 99.003456ms
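	The fix step above compares the guest clock against the controlling host and only resyncs when the drift exceeds a tolerance; here a ~99ms delta is accepted. A tiny illustrative Go check of that idea (a sketch with made-up names, not the fix.go implementation):

	package clocksketch

	import "time"

	// withinTolerance reports whether the guest/host clock drift is small enough
	// to skip a resync; the log above accepts a ~99ms delta.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}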
	I0314 00:58:26.059528   66232 start.go:83] releasing machines lock for "old-k8s-version-004791", held for 24.891823469s
	I0314 00:58:26.059556   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.059832   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:26.062667   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.063095   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.063126   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.063322   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064047   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064262   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064348   66232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:26.064396   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:26.064505   66232 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:26.064530   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:26.067308   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067569   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067602   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.067626   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067738   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:26.067912   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:26.068059   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:26.068063   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.068095   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.068199   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:26.068210   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:26.068347   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:26.068538   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:26.068717   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:26.182072   66232 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:26.188630   66232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:26.337675   66232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:26.344107   66232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:26.344178   66232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:26.363679   66232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:26.363704   66232 start.go:494] detecting cgroup driver to use...
	I0314 00:58:26.363770   66232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:26.380626   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:26.397287   66232 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:26.397354   66232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:26.411921   66232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:26.428111   66232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:26.548503   66232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:26.718585   66232 docker.go:233] disabling docker service ...
	I0314 00:58:26.718667   66232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:26.737814   66232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:26.759326   66232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:26.907505   66232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:27.052915   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:27.074324   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:27.096627   66232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 00:58:27.096688   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.109204   66232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:27.109280   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.122529   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.135542   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
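	The sed invocations above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf so the runtime uses the expected pause image and the cgroupfs cgroup manager. As the log shows, minikube performs this over SSH with sed; the following is only an equivalent, hypothetical Go sketch of the first two rewrites.

	package main

	import (
		"os"
		"regexp"
	)

	// Rewrite the CRI-O drop-in so pause_image and cgroup_manager match what the
	// kubelet will be configured with (pause:3.2 and cgroupfs for this profile).
	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			panic(err)
		}
	}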
	I0314 00:58:27.149084   66232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:27.166838   66232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:27.178148   66232 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:27.178201   66232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:27.194015   66232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:27.206652   66232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:27.363680   66232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:58:27.546218   66232 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:27.546291   66232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:27.552622   66232 start.go:562] Will wait 60s for crictl version
	I0314 00:58:27.552693   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:27.557087   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:27.600271   66232 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:27.600369   66232 ssh_runner.go:195] Run: crio --version
	I0314 00:58:27.631397   66232 ssh_runner.go:195] Run: crio --version
	I0314 00:58:27.670760   66232 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 00:58:27.671963   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:27.674890   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:27.675324   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:27.675352   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:27.675617   66232 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:27.680460   66232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:27.694168   66232 kubeadm.go:877] updating cluster {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:27.694308   66232 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:58:27.694363   66232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:27.750541   66232 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:58:27.750608   66232 ssh_runner.go:195] Run: which lz4
	I0314 00:58:27.755341   66232 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 00:58:27.759948   66232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:27.759972   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 00:58:23.835559   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:25.840794   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:28.343597   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:26.087053   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Start
	I0314 00:58:26.087223   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring networks are active...
	I0314 00:58:26.087972   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring network default is active
	I0314 00:58:26.088454   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring network mk-embed-certs-164135 is active
	I0314 00:58:26.088918   65557 main.go:141] libmachine: (embed-certs-164135) Getting domain xml...
	I0314 00:58:26.089551   65557 main.go:141] libmachine: (embed-certs-164135) Creating domain...
	I0314 00:58:27.427891   65557 main.go:141] libmachine: (embed-certs-164135) Waiting to get IP...
	I0314 00:58:27.428743   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.429231   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.429301   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.429210   67191 retry.go:31] will retry after 285.906124ms: waiting for machine to come up
	I0314 00:58:27.716658   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.717175   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.717209   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.717136   67191 retry.go:31] will retry after 261.410434ms: waiting for machine to come up
	I0314 00:58:27.980701   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.981229   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.981260   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.981171   67191 retry.go:31] will retry after 383.915233ms: waiting for machine to come up
	I0314 00:58:28.366876   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:28.367381   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:28.367410   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:28.367323   67191 retry.go:31] will retry after 409.436475ms: waiting for machine to come up
	I0314 00:58:28.778072   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:28.778576   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:28.778610   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:28.778531   67191 retry.go:31] will retry after 645.067189ms: waiting for machine to come up
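	The retry.go lines above show the wait loop for the embed-certs-164135 machine: each poll for a DHCP lease fails, so the next attempt is scheduled after a slightly longer, jittered interval. A hypothetical Go sketch of that retry-with-growing-backoff pattern (the function and parameter names are invented for illustration):

	package retrysketch

	import (
		"errors"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it reports an IP or the deadline passes,
	// sleeping a jittered, growing interval between attempts, like the
	// "will retry after ...ms: waiting for machine to come up" messages above.
	func waitForIP(lookup func() (string, bool), deadline time.Duration) (string, error) {
		start := time.Now()
		backoff := 250 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, ok := lookup(); ok {
				return ip, nil
			}
			// Add up to 50% jitter so concurrent waiters do not poll in lockstep.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			time.Sleep(sleep)
			if backoff < 5*time.Second {
				backoff += backoff / 2
			}
		}
		return "", errors.New("timed out waiting for machine to come up")
	}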
	I0314 00:58:25.621956   66021 node_ready.go:49] node "default-k8s-diff-port-652215" has status "Ready":"True"
	I0314 00:58:25.621981   66021 node_ready.go:38] duration metric: took 7.505100774s for node "default-k8s-diff-port-652215" to be "Ready" ...
	I0314 00:58:25.622001   66021 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:25.629545   66021 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.639732   66021 pod_ready.go:92] pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.639756   66021 pod_ready.go:81] duration metric: took 10.187009ms for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.639764   66021 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.645147   66021 pod_ready.go:92] pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.645169   66021 pod_ready.go:81] duration metric: took 5.39858ms for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.645177   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.654707   66021 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.654733   66021 pod_ready.go:81] duration metric: took 9.549239ms for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.654744   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.662542   66021 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.662564   66021 pod_ready.go:81] duration metric: took 7.811214ms for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.662573   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:26.022161   66021 pod_ready.go:92] pod "kube-proxy-s7dwp" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:26.022183   66021 pod_ready.go:81] duration metric: took 359.604841ms for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:26.022192   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:28.034582   66021 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:29.648218   66232 crio.go:444] duration metric: took 1.892901715s to copy over tarball
	I0314 00:58:29.648301   66232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:32.846478   66232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.198145754s)
	I0314 00:58:32.846506   66232 crio.go:451] duration metric: took 3.198257099s to extract the tarball
	I0314 00:58:32.846513   66232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:32.893263   66232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:32.930449   66232 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:58:32.930473   66232 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:58:32.930511   66232 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:32.930536   66232 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:32.930550   66232 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:32.930559   66232 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:32.930802   66232 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:32.930888   66232 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 00:58:32.930940   66232 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:32.931147   66232 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:32.931888   66232 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:32.931948   66232 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:32.932319   66232 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:32.932341   66232 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 00:58:32.932374   66232 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:32.932381   66232 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:32.932370   66232 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:32.932419   66232 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:30.836400   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:32.841831   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:29.425434   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:29.425984   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:29.426008   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:29.425942   67191 retry.go:31] will retry after 703.398838ms: waiting for machine to come up
	I0314 00:58:30.130649   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:30.131265   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:30.131297   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:30.131224   67191 retry.go:31] will retry after 787.377618ms: waiting for machine to come up
	I0314 00:58:30.919951   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:30.920381   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:30.920416   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:30.920331   67191 retry.go:31] will retry after 1.211901471s: waiting for machine to come up
	I0314 00:58:32.133720   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:32.134308   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:32.134337   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:32.134254   67191 retry.go:31] will retry after 1.852403479s: waiting for machine to come up
	I0314 00:58:33.987895   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:33.988474   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:33.988503   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:33.988426   67191 retry.go:31] will retry after 2.321557159s: waiting for machine to come up
	I0314 00:58:30.530679   66021 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:30.530711   66021 pod_ready.go:81] duration metric: took 4.508510256s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:30.530725   66021 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:32.539227   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:34.543975   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:33.154008   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 00:58:33.158391   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.163815   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.167903   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.168224   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.169039   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.185385   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.418931   66232 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 00:58:33.418981   66232 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 00:58:33.419031   66232 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 00:58:33.419052   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419063   66232 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 00:58:33.419099   66232 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.419031   66232 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 00:58:33.419118   66232 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 00:58:33.419141   66232 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.419173   66232 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 00:58:33.419200   66232 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.419232   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419099   66232 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.419310   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419177   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419143   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419142   66232 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 00:58:33.419396   66232 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.419419   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419144   66232 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.419472   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.436581   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 00:58:33.436585   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.436693   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.436697   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.436760   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.436812   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.436821   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.605693   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 00:58:33.605727   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 00:58:33.605788   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 00:58:33.605799   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 00:58:33.605879   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 00:58:33.605912   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 00:58:33.605952   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 00:58:33.844071   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:33.989885   66232 cache_images.go:92] duration metric: took 1.059398314s to LoadCachedImages
	W0314 00:58:33.990001   66232 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0314 00:58:33.990027   66232 kubeadm.go:928] updating node { 192.168.72.11 8443 v1.20.0 crio true true} ...
	I0314 00:58:33.990157   66232 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-004791 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:33.990220   66232 ssh_runner.go:195] Run: crio config
	I0314 00:58:34.044723   66232 cni.go:84] Creating CNI manager for ""
	I0314 00:58:34.044746   66232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:34.044759   66232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:34.044775   66232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-004791 NodeName:old-k8s-version-004791 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 00:58:34.044900   66232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-004791"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:34.044958   66232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 00:58:34.059679   66232 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:34.059734   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:34.073682   66232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0314 00:58:34.095098   66232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:34.113899   66232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0314 00:58:34.132875   66232 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:34.137285   66232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:34.151566   66232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:34.276059   66232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:34.295472   66232 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791 for IP: 192.168.72.11
	I0314 00:58:34.295496   66232 certs.go:194] generating shared ca certs ...
	I0314 00:58:34.295528   66232 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:34.295718   66232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:34.295779   66232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:34.295794   66232 certs.go:256] generating profile certs ...
	I0314 00:58:34.295909   66232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/client.key
	I0314 00:58:34.295968   66232 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key.c57f8e0c
	I0314 00:58:34.296022   66232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key
	I0314 00:58:34.296176   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:34.296213   66232 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:34.296224   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:34.296255   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:34.296296   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:34.296336   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:34.296397   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:34.297181   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:34.351330   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:34.389003   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:34.439281   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:34.476704   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 00:58:34.524931   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:58:34.554905   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:34.584216   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:58:34.610661   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:34.636484   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:34.662623   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:34.692373   66232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:34.714670   66232 ssh_runner.go:195] Run: openssl version
	I0314 00:58:34.721394   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:34.734219   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.739692   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.739767   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.746281   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:34.758520   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:34.770960   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.775963   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.776034   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.782485   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:34.795932   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:34.808632   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.814277   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.814338   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.820985   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
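
The three blocks above install each CA the same way: the PEM is copied into /usr/share/ca-certificates, its OpenSSL subject hash is computed, and /etc/ssl/certs/<hash>.0 is symlinked back to it so OpenSSL-based clients pick it up. Below is a minimal Go sketch of the hash-and-link step, shelling out to openssl just as the log does; the file list and the ln -fs-style replace come from the log, and running with root privileges is assumed.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of an installed CA PEM and
// creates the /etc/ssl/certs/<hash>.0 symlink that OpenSSL-based clients use
// for lookup, mirroring the `openssl x509 -hash` + `ln -fs` pairs in the log.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // `ln -fs` semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/12268.pem",
		"/usr/share/ca-certificates/122682.pem",
	} {
		if err := linkCACert(pem); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
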
	I0314 00:58:34.832959   66232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:34.838642   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:34.845061   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:34.852475   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:34.859861   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:34.866413   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:34.873327   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
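
The -checkend 86400 calls above make openssl exit non-zero when a certificate expires within the next 24 hours, which is how the restart path decides whether the existing control-plane certs can be reused. The same condition can be checked in Go with crypto/x509; this sketch is illustrative, and only the two paths shown are taken from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window, i.e. the condition that makes
// `openssl x509 -checkend 86400` exit non-zero in the log above.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
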
	I0314 00:58:34.880000   66232 kubeadm.go:391] StartCluster: {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:34.880134   66232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:34.880194   66232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:34.927555   66232 cri.go:89] found id: ""
	I0314 00:58:34.927623   66232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:34.939638   66232 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:34.939668   66232 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:34.939677   66232 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:34.939741   66232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:34.950530   66232 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:34.952013   66232 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-004791" does not appear in /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:34.952997   66232 kubeconfig.go:62] /home/jenkins/minikube-integration/18375-4912/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-004791" cluster setting kubeconfig missing "old-k8s-version-004791" context setting]
	I0314 00:58:34.954526   66232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:34.956927   66232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:34.968566   66232 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.11
	I0314 00:58:34.968605   66232 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:34.968619   66232 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:34.968700   66232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:35.007848   66232 cri.go:89] found id: ""
	I0314 00:58:35.007925   66232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:35.025328   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:35.038637   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:35.038656   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:35.038709   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:35.050807   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:35.050869   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:35.063219   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:35.075855   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:35.075920   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:35.085699   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:35.095334   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:35.095380   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:35.105241   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:35.115726   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:35.115792   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:35.125426   66232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:35.135277   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:35.258033   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.100884   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.354746   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.473996   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
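
Rather than a full kubeadm init, the restart path above regenerates the control plane phase by phase: certs, kubeconfigs, kubelet bootstrap, static-pod manifests and local etcd, all against /var/tmp/minikube/kubeadm.yaml. A hedged Go sketch of that loop, using plain os/exec in place of minikube's ssh_runner; the binary and config paths are copied from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runKubeadmPhases replays the phase sequence from the log: certs, kubeconfig,
// kubelet-start, control-plane and etcd are regenerated one phase at a time
// against the kubeadm config that was just copied into place.
func runKubeadmPhases() error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.20.0/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v: %w", phase, err)
		}
	}
	return nil
}

func main() {
	if err := runKubeadmPhases(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
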
	I0314 00:58:36.579335   66232 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:36.579424   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:37.079896   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:37.579976   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:38.079765   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
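
The repeated pgrep lines are api_server.go polling about every 500ms for a kube-apiserver process to appear after the static-pod manifests were written. A minimal sketch of such a wait loop follows; the pgrep pattern and cadence come from the log, while the 2-minute timeout is an assumption.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls `pgrep -xnf kube-apiserver.*minikube.*` until
// it succeeds or the deadline passes, matching the ~500ms cadence visible in
// the timestamps above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
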
	I0314 00:58:35.336276   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:37.336541   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:36.312235   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:36.312720   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:36.312746   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:36.312680   67191 retry.go:31] will retry after 2.808090469s: waiting for machine to come up
	I0314 00:58:39.123977   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:39.124488   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:39.124538   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:39.124440   67191 retry.go:31] will retry after 2.588860378s: waiting for machine to come up
	I0314 00:58:37.037739   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:39.540372   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:38.579818   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.079976   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.579658   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:40.079585   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:40.580162   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:41.079979   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:41.580503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:42.079887   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:42.579730   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:43.080073   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.838343   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:42.335840   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:41.714544   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:41.715054   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:41.715078   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:41.715008   67191 retry.go:31] will retry after 4.450032332s: waiting for machine to come up
	I0314 00:58:41.540801   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:44.037483   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:43.579875   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.080058   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.579576   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:45.080234   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:45.579747   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:46.080269   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:46.579541   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:47.079514   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:47.580409   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:48.080337   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.337213   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:46.835872   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:46.166725   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.167181   65557 main.go:141] libmachine: (embed-certs-164135) Found IP for machine: 192.168.50.72
	I0314 00:58:46.167200   65557 main.go:141] libmachine: (embed-certs-164135) Reserving static IP address...
	I0314 00:58:46.167211   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has current primary IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.167614   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "embed-certs-164135", mac: "52:54:00:58:8b:2b", ip: "192.168.50.72"} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.167650   65557 main.go:141] libmachine: (embed-certs-164135) Reserved static IP address: 192.168.50.72
	I0314 00:58:46.167671   65557 main.go:141] libmachine: (embed-certs-164135) DBG | skip adding static IP to network mk-embed-certs-164135 - found existing host DHCP lease matching {name: "embed-certs-164135", mac: "52:54:00:58:8b:2b", ip: "192.168.50.72"}
	I0314 00:58:46.167691   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Getting to WaitForSSH function...
	I0314 00:58:46.167705   65557 main.go:141] libmachine: (embed-certs-164135) Waiting for SSH to be available...
	I0314 00:58:46.169798   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.170208   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.170241   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.170374   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Using SSH client type: external
	I0314 00:58:46.170395   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa (-rw-------)
	I0314 00:58:46.170424   65557 main.go:141] libmachine: (embed-certs-164135) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:58:46.170436   65557 main.go:141] libmachine: (embed-certs-164135) DBG | About to run SSH command:
	I0314 00:58:46.170448   65557 main.go:141] libmachine: (embed-certs-164135) DBG | exit 0
	I0314 00:58:46.298947   65557 main.go:141] libmachine: (embed-certs-164135) DBG | SSH cmd err, output: <nil>: 
	I0314 00:58:46.299260   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetConfigRaw
	I0314 00:58:46.300011   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:46.302213   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.302573   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.302601   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.302857   65557 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/config.json ...
	I0314 00:58:46.303051   65557 machine.go:94] provisionDockerMachine start ...
	I0314 00:58:46.303073   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:46.303267   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.305543   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.305933   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.305966   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.306127   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.306278   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.306414   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.306542   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.306693   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.306879   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.306892   65557 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:58:46.423896   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:58:46.423927   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.424233   65557 buildroot.go:166] provisioning hostname "embed-certs-164135"
	I0314 00:58:46.424264   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.424489   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.427579   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.428007   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.428038   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.428220   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.428416   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.428609   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.428790   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.428972   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.429192   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.429222   65557 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-164135 && echo "embed-certs-164135" | sudo tee /etc/hostname
	I0314 00:58:46.563737   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-164135
	
	I0314 00:58:46.563766   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.566892   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.567220   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.567251   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.567453   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.567641   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.567802   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.567945   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.568094   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.568261   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.568276   65557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-164135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-164135/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-164135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:46.693410   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:46.693445   65557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:46.693499   65557 buildroot.go:174] setting up certificates
	I0314 00:58:46.693511   65557 provision.go:84] configureAuth start
	I0314 00:58:46.693529   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.693870   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:46.696706   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.697040   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.697071   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.697225   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.699614   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.699942   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.699973   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.700098   65557 provision.go:143] copyHostCerts
	I0314 00:58:46.700164   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:46.700178   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:46.700232   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:46.700361   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:46.700377   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:46.700411   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:46.700495   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:46.700505   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:46.700528   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:46.700580   65557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.embed-certs-164135 san=[127.0.0.1 192.168.50.72 embed-certs-164135 localhost minikube]
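
provision.go then issues a server certificate signed by the machine CA whose SANs cover the loopback address, the node IP, the hostname and the generic minikube names listed above. A rough crypto/x509 sketch of issuing such a cert; the input paths, validity period and the PKCS#1 RSA key format are assumptions for illustration, while the SAN list and organization name are taken from the log line.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustDecode reads a PEM file and returns the DER bytes of its first block.
func mustDecode(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data in " + path)
	}
	return block.Bytes
}

func main() {
	// CA material; paths are illustrative and the key is assumed to be PKCS#1 RSA.
	caCert, err := x509.ParseCertificate(mustDecode("ca.pem"))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem"))
	if err != nil {
		panic(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-164135"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: 127.0.0.1 192.168.50.72 embed-certs-164135 localhost minikube
		DNSNames:    []string{"embed-certs-164135", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.72")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
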
	I0314 00:58:46.821935   65557 provision.go:177] copyRemoteCerts
	I0314 00:58:46.822010   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:46.822046   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.824932   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.825275   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.825310   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.825512   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.825744   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.825887   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.826082   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:46.913839   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:46.943631   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 00:58:46.971617   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 00:58:46.999369   65557 provision.go:87] duration metric: took 305.844222ms to configureAuth
	I0314 00:58:46.999394   65557 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:46.999570   65557 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:46.999664   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.002702   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.003165   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.003190   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.003438   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.003687   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.003859   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.004006   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.004146   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:47.004340   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:47.004358   65557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:47.290132   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:47.290155   65557 machine.go:97] duration metric: took 987.089694ms to provisionDockerMachine
	I0314 00:58:47.290168   65557 start.go:293] postStartSetup for "embed-certs-164135" (driver="kvm2")
	I0314 00:58:47.290182   65557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:47.290203   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.290511   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:47.290552   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.293582   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.293932   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.293962   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.294089   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.294272   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.294428   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.294671   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.387339   65557 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:47.392557   65557 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:47.392582   65557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:47.392654   65557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:47.392748   65557 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:47.392858   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:47.404173   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:47.435222   65557 start.go:296] duration metric: took 145.038242ms for postStartSetup
	I0314 00:58:47.435269   65557 fix.go:56] duration metric: took 21.375588272s for fixHost
	I0314 00:58:47.435302   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.438631   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.439032   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.439076   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.439272   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.439467   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.439706   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.439850   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.440043   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:47.440200   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:47.440210   65557 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:58:47.560144   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377927.541841951
	
	I0314 00:58:47.560170   65557 fix.go:216] guest clock: 1710377927.541841951
	I0314 00:58:47.560182   65557 fix.go:229] Guest: 2024-03-14 00:58:47.541841951 +0000 UTC Remote: 2024-03-14 00:58:47.435274983 +0000 UTC m=+363.148559319 (delta=106.566968ms)
	I0314 00:58:47.560225   65557 fix.go:200] guest clock delta is within tolerance: 106.566968ms
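
fix.go reads the guest clock with date +%s.%N and compares it against the host clock; in this run the ~106ms delta is within tolerance, so no resync is needed. Below is a small sketch of that comparison, run locally rather than over SSH, with a one-second tolerance as an assumption.

package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta runs `date +%s.%N` (locally here; over SSH in minikube),
// parses the seconds.nanoseconds output and returns how far that clock is
// from the local clock.
func guestClockDelta() (time.Duration, error) {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		return 0, err
	}
	guest, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	host := float64(time.Now().UnixNano()) / float64(time.Second)
	return time.Duration(math.Abs(host-guest) * float64(time.Second)), nil
}

func main() {
	const tolerance = time.Second // assumed tolerance; the run above passed at ~106ms
	delta, err := guestClockDelta()
	if err != nil {
		fmt.Println("clock check failed:", err)
		return
	}
	if delta > tolerance {
		fmt.Printf("guest clock delta %s exceeds tolerance, would resync\n", delta)
		return
	}
	fmt.Printf("guest clock delta %s is within tolerance\n", delta)
}
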
	I0314 00:58:47.560232   65557 start.go:83] releasing machines lock for "embed-certs-164135", held for 21.500586263s
	I0314 00:58:47.560259   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.560524   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:47.563578   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.563984   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.564007   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.564165   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564627   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564837   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564919   65557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:47.564973   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.565070   65557 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:47.565097   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.567831   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568013   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568257   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.568284   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568398   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.568422   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568432   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.568625   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.568630   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.568821   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.568824   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.568927   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.568980   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.569131   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.652798   65557 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:47.689415   65557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:47.842567   65557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:47.849511   65557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:47.849574   65557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:47.868424   65557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:47.868448   65557 start.go:494] detecting cgroup driver to use...
	I0314 00:58:47.868509   65557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:47.887449   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:47.902382   65557 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:47.902442   65557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:47.916938   65557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:47.932214   65557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:48.055437   65557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:48.233856   65557 docker.go:233] disabling docker service ...
	I0314 00:58:48.233932   65557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:48.250632   65557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:48.265181   65557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:48.397526   65557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:48.539003   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:48.555791   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:48.576760   65557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:58:48.576812   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.589305   65557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:48.589410   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.602952   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.614619   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.626026   65557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:48.637921   65557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:48.648336   65557 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:48.648397   65557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:48.663603   65557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:48.674731   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:48.804506   65557 ssh_runner.go:195] Run: sudo systemctl restart crio
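
The sed calls above pin the pause image, switch cri-o to the cgroupfs cgroup manager with conmon_cgroup = "pod", enable bridge netfilter and IP forwarding, and restart crio so the drop-in takes effect. A sketch of the same drop-in rewrite done with regexp instead of sed; the file path and values are taken from the log, and doing the edit in-process rather than over SSH is an assumption.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"regexp"
)

// rewriteCrioConf applies the same substitutions as the sed commands above:
// force the pause image, drop any stale conmon_cgroup line, switch the cgroup
// manager to cgroupfs with conmon_cgroup = "pod", then restart crio.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		return err
	}
	return exec.Command("systemctl", "restart", "crio").Run()
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
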
	I0314 00:58:48.949960   65557 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:48.950037   65557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:48.955185   65557 start.go:562] Will wait 60s for crictl version
	I0314 00:58:48.955248   65557 ssh_runner.go:195] Run: which crictl
	I0314 00:58:48.959205   65557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:48.998285   65557 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:48.998378   65557 ssh_runner.go:195] Run: crio --version
	I0314 00:58:49.028352   65557 ssh_runner.go:195] Run: crio --version
	I0314 00:58:49.061493   65557 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 00:58:49.062817   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:49.065664   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:49.066015   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:49.066042   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:49.066240   65557 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:49.071178   65557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:49.085832   65557 kubeadm.go:877] updating cluster {Name:embed-certs-164135 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:49.086050   65557 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:58:49.086127   65557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:49.127181   65557 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 00:58:49.127258   65557 ssh_runner.go:195] Run: which lz4
	I0314 00:58:49.131578   65557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 00:58:49.136474   65557 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:49.136504   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 00:58:46.038840   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:48.540509   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:48.579595   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.079898   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.580139   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:50.079945   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:50.579977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:51.079981   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:51.580391   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:52.080057   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:52.579968   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:53.080503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.336251   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:51.841160   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:50.939606   65557 crio.go:444] duration metric: took 1.808075483s to copy over tarball
	I0314 00:58:50.939682   65557 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:53.536072   65557 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.596358521s)
	I0314 00:58:53.536109   65557 crio.go:451] duration metric: took 2.596476827s to extract the tarball
	I0314 00:58:53.536119   65557 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:53.579265   65557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:53.626350   65557 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:58:53.626371   65557 cache_images.go:84] Images are preloaded, skipping loading
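Note: the preload decision above is driven entirely by "crictl images --output json": the first run finds no kube-apiserver image, the tarball is copied and extracted into /var, and the second run reports everything present. A rough manual equivalent (a sketch; it assumes jq is available in the guest, which minikube itself does not require) would be:

    sudo crictl images --output json \
      | jq -r '.images[].repoTags[]' \
      | grep 'registry.k8s.io/kube-apiserver:v1.28.4'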
	I0314 00:58:53.626378   65557 kubeadm.go:928] updating node { 192.168.50.72 8443 v1.28.4 crio true true} ...
	I0314 00:58:53.626500   65557 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-164135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
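Note: the [Unit]/[Service] fragment logged above is not the full kubelet unit; it is rendered into the 317-byte systemd drop-in that gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A quick way to inspect the merged unit on the node (a sketch, assuming shell access to the guest) is:

    sudo systemctl cat kubelet
    sudo systemctl show kubelet -p DropInPaths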
	I0314 00:58:53.626586   65557 ssh_runner.go:195] Run: crio config
	I0314 00:58:53.679923   65557 cni.go:84] Creating CNI manager for ""
	I0314 00:58:53.679946   65557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:53.679958   65557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:53.679976   65557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.72 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-164135 NodeName:embed-certs-164135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:58:53.680104   65557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-164135"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:53.680163   65557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:58:53.690891   65557 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:53.690972   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:53.701173   65557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0314 00:58:53.719020   65557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:53.737828   65557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
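Note: the 2159-byte file just staged as /var/tmp/minikube/kubeadm.yaml.new is the four-document config logged above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is promoted to kubeadm.yaml before the "kubeadm init phase" calls further down. A minimal sanity check of the staged file (a sketch using only standard tools in the guest) is to list each document's kind and apiVersion:

    sudo awk '/^(apiVersion|kind):/' /var/tmp/minikube/kubeadm.yaml.new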
	I0314 00:58:53.756425   65557 ssh_runner.go:195] Run: grep 192.168.50.72	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:53.760294   65557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
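Note: the one-liner above is minikube's upsert idiom for /etc/hosts: drop any existing line ending in a tab plus the host name, append the current IP-to-name mapping, write the result to a temp file, and copy it back with sudo (the redirection itself runs unprivileged). Expanded for readability, with the same values this run uses, it is roughly:

    ip="192.168.50.72"; name="control-plane.minikube.internal"
    { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "${ip}" "${name}"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts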
	I0314 00:58:53.773705   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:53.892346   65557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:53.910603   65557 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135 for IP: 192.168.50.72
	I0314 00:58:53.910627   65557 certs.go:194] generating shared ca certs ...
	I0314 00:58:53.910647   65557 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:53.910827   65557 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:53.910871   65557 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:53.910880   65557 certs.go:256] generating profile certs ...
	I0314 00:58:53.910979   65557 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/client.key
	I0314 00:58:53.911031   65557 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.key.e2917335
	I0314 00:58:53.911064   65557 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.key
	I0314 00:58:53.911166   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:53.911192   65557 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:53.911239   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:53.911262   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:53.911282   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:53.911306   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:53.911340   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:53.911957   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:53.966930   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:54.004054   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:54.052130   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:54.079203   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0314 00:58:54.120151   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:58:54.148078   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:54.176982   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 00:58:54.205291   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:54.231890   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:54.258106   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:54.284561   65557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
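Note: the scp block above pushes the shared CAs, the profile certs, and the kubeconfig into the guest. One way to confirm that a copied leaf cert chains to the cluster CA (a sketch, not something the test harness runs) is:

    sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt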
	I0314 00:58:54.303013   65557 ssh_runner.go:195] Run: openssl version
	I0314 00:58:54.309043   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:54.320237   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.325350   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.325394   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.331618   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:51.037616   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:53.039388   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:53.579463   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.080043   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.579984   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:55.080165   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:55.580029   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.079980   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.580014   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.080139   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.580122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:58.080405   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.335226   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:56.841123   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:54.343570   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:54.542451   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.547508   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.547561   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.553553   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:54.565071   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:54.577055   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.582453   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.582503   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.588916   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
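Note: the symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) are not arbitrary; OpenSSL looks CA certs up in /etc/ssl/certs by subject-hash symlinks, and the hash printed by "openssl x509 -hash" is exactly the link name (with a .0 suffix for the first cert sharing that hash). A sketch of the derivation for the minikube CA:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "/etc/ssl/certs/${h}.0 -> /usr/share/ca-certificates/minikubeCA.pem"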
	I0314 00:58:54.601405   65557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:54.606092   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:54.612639   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:54.619071   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:54.625702   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:54.631739   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:54.637769   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
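Note: the six openssl runs above verify that each existing control-plane certificate remains valid for at least another 24 hours ("-checkend 86400" exits non-zero if the cert expires within that window). The same check as a loop over the paths shown in this log (a sketch) looks like:

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        && echo "${c}: ok" || echo "${c}: expires within 24h"
    done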
	I0314 00:58:54.644061   65557 kubeadm.go:391] StartCluster: {Name:embed-certs-164135 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:54.644158   65557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:54.644207   65557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:54.683466   65557 cri.go:89] found id: ""
	I0314 00:58:54.683537   65557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:54.695034   65557 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:54.695056   65557 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:54.695062   65557 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:54.695122   65557 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:54.706010   65557 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:54.707111   65557 kubeconfig.go:125] found "embed-certs-164135" server: "https://192.168.50.72:8443"
	I0314 00:58:54.709121   65557 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:54.722953   65557 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.72
	I0314 00:58:54.722994   65557 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:54.723009   65557 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:54.723100   65557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:54.787268   65557 cri.go:89] found id: ""
	I0314 00:58:54.787345   65557 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:54.816753   65557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:54.828303   65557 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:54.828333   65557 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:54.828385   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:54.841953   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:54.842070   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:54.854072   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:54.867993   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:54.868062   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:54.878707   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:54.888993   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:54.889070   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:54.899214   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:54.909228   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:54.909279   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:54.920066   65557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:54.931094   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.052967   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.727704   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.951743   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:56.038342   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:56.138332   65557 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:56.138421   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.639433   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.138622   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.167124   65557 api_server.go:72] duration metric: took 1.028792267s to wait for apiserver process to appear ...
	I0314 00:58:57.167147   65557 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:57.167168   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:58:57.167606   65557 api_server.go:269] stopped: https://192.168.50.72:8443/healthz: Get "https://192.168.50.72:8443/healthz": dial tcp 192.168.50.72:8443: connect: connection refused
	I0314 00:58:57.668020   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:58:55.579569   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:58.039695   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:00.039862   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:00.321979   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:59:00.322014   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:59:00.322033   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:00.354801   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:59:00.354829   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:59:00.668268   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:00.673345   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:59:00.673375   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:59:01.167291   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:01.172646   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:59:01.172674   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:59:01.667928   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:01.675916   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0314 00:59:01.684834   65557 api_server.go:141] control plane version: v1.28.4
	I0314 00:59:01.684866   65557 api_server.go:131] duration metric: took 4.517711081s to wait for apiserver health ...
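Note: the 403 responses earlier in this wait are expected; the probe is anonymous, and anonymous access to /healthz is only granted once the rbac/bootstrap-roles post-start hook (shown failing in the 500 responses) has created the bootstrap roles. Once the endpoint returns 200, the same detail is available with admin credentials, e.g. (a sketch, assuming kubectl is among the binaries staged under /var/lib/minikube/binaries/v1.28.4):

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig /etc/kubernetes/admin.conf \
      get --raw '/healthz?verbose'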
	I0314 00:59:01.684877   65557 cni.go:84] Creating CNI manager for ""
	I0314 00:59:01.684886   65557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:59:01.687151   65557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:58.580011   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:59.079610   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:59.579674   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:00.079861   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:00.579713   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.079977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.580027   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:02.079793   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:02.579549   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:03.080040   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.688950   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:59:01.730963   65557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:59:01.777163   65557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:59:01.788546   65557 system_pods.go:59] 8 kube-system pods found
	I0314 00:59:01.788590   65557 system_pods.go:61] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:59:01.788602   65557 system_pods.go:61] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:59:01.788614   65557 system_pods.go:61] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:59:01.788626   65557 system_pods.go:61] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:59:01.788641   65557 system_pods.go:61] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 00:59:01.788650   65557 system_pods.go:61] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:59:01.788662   65557 system_pods.go:61] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:59:01.788681   65557 system_pods.go:61] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 00:59:01.788692   65557 system_pods.go:74] duration metric: took 11.509392ms to wait for pod list to return data ...
	I0314 00:59:01.788701   65557 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:59:01.795122   65557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:59:01.795147   65557 node_conditions.go:123] node cpu capacity is 2
	I0314 00:59:01.795157   65557 node_conditions.go:105] duration metric: took 6.44942ms to run NodePressure ...
	I0314 00:59:01.795172   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:59:02.044317   65557 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:59:02.050019   65557 kubeadm.go:733] kubelet initialised
	I0314 00:59:02.050040   65557 kubeadm.go:734] duration metric: took 5.70331ms waiting for restarted kubelet to initialise ...
	I0314 00:59:02.050049   65557 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:02.056678   65557 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.061780   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.061803   65557 pod_ready.go:81] duration metric: took 5.104116ms for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.061811   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.061817   65557 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.067102   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "etcd-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.067123   65557 pod_ready.go:81] duration metric: took 5.298132ms for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.067134   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "etcd-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.067142   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.072079   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.072097   65557 pod_ready.go:81] duration metric: took 4.946567ms for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.072105   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.072110   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.181781   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.181814   65557 pod_ready.go:81] duration metric: took 109.687713ms for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.181827   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.181835   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.581700   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-proxy-wjz6d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.581726   65557 pod_ready.go:81] duration metric: took 399.880012ms for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.581734   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-proxy-wjz6d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.581741   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.981386   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.981415   65557 pod_ready.go:81] duration metric: took 399.66708ms for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.981428   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.981434   65557 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:03.381927   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:03.381964   65557 pod_ready.go:81] duration metric: took 400.519247ms for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:03.381976   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:03.381986   65557 pod_ready.go:38] duration metric: took 1.331926826s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
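Note: every pod_ready check above short-circuits for the same reason: the node object itself still reports Ready=False, so the per-pod wait is skipped and retried later. Roughly the same condition, expressed with kubectl instead of minikube's internal poller (a sketch, not what the test harness runs), would be:

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig /etc/kubernetes/admin.conf \
      -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m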
	I0314 00:59:03.382007   65557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:59:03.397550   65557 ops.go:34] apiserver oom_adj: -16
	I0314 00:59:03.397571   65557 kubeadm.go:591] duration metric: took 8.702501848s to restartPrimaryControlPlane
	I0314 00:59:03.397583   65557 kubeadm.go:393] duration metric: took 8.753529728s to StartCluster
	I0314 00:59:03.397601   65557 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:59:03.397687   65557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:59:03.399793   65557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:59:03.400058   65557 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:59:03.402113   65557 out.go:177] * Verifying Kubernetes components...
	I0314 00:59:03.400139   65557 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:59:03.400293   65557 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:59:03.403722   65557 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-164135"
	I0314 00:59:03.403746   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:59:03.403773   65557 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-164135"
	W0314 00:59:03.403788   65557 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:59:03.403822   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.403725   65557 addons.go:69] Setting metrics-server=true in profile "embed-certs-164135"
	I0314 00:59:03.403888   65557 addons.go:234] Setting addon metrics-server=true in "embed-certs-164135"
	W0314 00:59:03.403922   65557 addons.go:243] addon metrics-server should already be in state true
	I0314 00:59:03.403727   65557 addons.go:69] Setting default-storageclass=true in profile "embed-certs-164135"
	I0314 00:59:03.403960   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.403978   65557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-164135"
	I0314 00:59:03.404257   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404295   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.404316   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404332   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.404355   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404387   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.420268   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0314 00:59:03.420835   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.421449   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.421474   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.421817   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.421860   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0314 00:59:03.422393   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.422414   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.422447   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.422893   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.422917   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.423232   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.423387   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.423804   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44335
	I0314 00:59:03.424136   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.424718   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.424737   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.426912   65557 addons.go:234] Setting addon default-storageclass=true in "embed-certs-164135"
	W0314 00:59:03.426935   65557 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:59:03.426962   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.427356   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.427387   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.427586   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.428046   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.428077   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.440982   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40425
	I0314 00:59:03.441492   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.442055   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.442077   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.442569   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I0314 00:59:03.442608   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.442838   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.443084   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.443708   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.443729   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.444112   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.444150   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33079
	I0314 00:59:03.444307   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.444598   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.444915   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.445374   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.445408   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.448170   65557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:59:03.445928   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.445963   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.449754   65557 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:59:03.448952   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.449778   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:59:03.451092   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.451092   65557 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:59.336088   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:01.338156   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:03.452582   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:59:03.451157   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.452695   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:59:03.452720   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.454750   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.455252   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.455282   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.455410   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.455600   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.455777   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.455944   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.455989   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.456439   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.456477   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.456710   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.456869   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.457034   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.457226   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.469815   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35435
	I0314 00:59:03.470353   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.470873   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.470895   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.471166   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.471370   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.472977   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.473244   65557 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:59:03.473258   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:59:03.473271   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.476223   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.476682   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.476709   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.476857   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.477040   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.477171   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.477302   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.616718   65557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:59:03.634198   65557 node_ready.go:35] waiting up to 6m0s for node "embed-certs-164135" to be "Ready" ...
	I0314 00:59:03.716113   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:59:03.749507   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:59:03.749536   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:59:03.755619   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:59:03.790208   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:59:03.790231   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:59:03.846087   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:59:03.846118   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:59:03.892534   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:59:04.977315   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.221655296s)
	I0314 00:59:04.977372   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977386   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977433   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.261285831s)
	I0314 00:59:04.977471   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977481   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977698   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.977722   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.977731   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977738   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977783   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.977705   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.978016   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.978033   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.978067   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.978803   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.978822   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.978842   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.978883   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.980542   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.980629   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.980683   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.985502   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.985521   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.985822   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.985854   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.985862   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.071684   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.179091576s)
	I0314 00:59:05.071736   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:05.071751   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:05.072016   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:05.072040   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.072050   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:05.072057   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:05.072248   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:05.072260   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.072271   65557 addons.go:470] Verifying addon metrics-server=true in "embed-certs-164135"
	I0314 00:59:05.074420   65557 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:59:02.537641   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:04.539777   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:03.580280   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:04.079957   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:04.580070   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:05.079965   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:05.580193   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:06.079657   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:06.580026   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:07.080460   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:07.579573   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:08.079458   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:03.836267   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:05.837427   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:07.838129   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:05.075856   65557 addons.go:505] duration metric: took 1.675722032s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 00:59:05.639116   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:08.138282   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:07.039088   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:09.538790   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:08.579872   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.080006   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.579949   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:10.079511   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:10.579616   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:11.080003   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:11.580335   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:12.079830   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:12.579519   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:13.080004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.839624   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:12.335977   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:10.138471   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:11.138534   65557 node_ready.go:49] node "embed-certs-164135" has status "Ready":"True"
	I0314 00:59:11.138572   65557 node_ready.go:38] duration metric: took 7.504341185s for node "embed-certs-164135" to be "Ready" ...
	I0314 00:59:11.138593   65557 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:11.145002   65557 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:11.150712   65557 pod_ready.go:92] pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:11.150735   65557 pod_ready.go:81] duration metric: took 5.69376ms for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:11.150743   65557 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:13.157122   65557 pod_ready.go:102] pod "etcd-embed-certs-164135" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:11.539006   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:14.038372   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:13.580021   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.079972   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.580562   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:15.079973   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:15.580183   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:16.080442   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:16.580265   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:17.079726   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:17.580004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:18.080000   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.336576   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:16.836200   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:15.158112   65557 pod_ready.go:92] pod "etcd-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.158134   65557 pod_ready.go:81] duration metric: took 4.0073854s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.158143   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.164046   65557 pod_ready.go:92] pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.164066   65557 pod_ready.go:81] duration metric: took 5.916933ms for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.164075   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.172381   65557 pod_ready.go:92] pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.172400   65557 pod_ready.go:81] duration metric: took 8.319741ms for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.172408   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.178027   65557 pod_ready.go:92] pod "kube-proxy-wjz6d" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.178047   65557 pod_ready.go:81] duration metric: took 5.632365ms for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.178066   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.185425   65557 pod_ready.go:92] pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.185445   65557 pod_ready.go:81] duration metric: took 7.370111ms for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.185455   65557 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:17.191963   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:19.198718   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:16.537469   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:18.537882   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:18.580382   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.079467   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.579813   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:20.080492   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:20.580051   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:21.079982   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:21.579462   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:22.079943   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:22.579753   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:23.079986   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.336004   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:21.835829   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:21.694213   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:24.192099   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:20.538270   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:23.038355   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:23.579609   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:24.080429   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:24.579806   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:25.079568   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:25.580411   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:26.079986   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:26.580297   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:27.079547   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:27.579543   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:28.080116   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:23.837356   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:25.844148   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.336761   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:26.193550   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.693261   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:25.537801   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.038015   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.580503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:29.079562   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:29.579984   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.079977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.579657   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:31.080002   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:31.580430   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:32.079709   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:32.579764   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:33.079717   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.835476   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.335371   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:31.192779   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.194092   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:30.537951   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:32.538810   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:35.038186   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.579468   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:34.079959   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:34.579891   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:35.079953   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:35.579666   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:36.080471   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:36.580528   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:36.580620   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:36.628794   66232 cri.go:89] found id: ""
	I0314 00:59:36.628825   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.628836   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:36.628844   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:36.628903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:36.665474   66232 cri.go:89] found id: ""
	I0314 00:59:36.665504   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.665514   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:36.665521   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:36.665612   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:36.703404   66232 cri.go:89] found id: ""
	I0314 00:59:36.703436   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.703443   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:36.703449   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:36.703515   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:36.739602   66232 cri.go:89] found id: ""
	I0314 00:59:36.739629   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.739636   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:36.739642   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:36.739698   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:36.777836   66232 cri.go:89] found id: ""
	I0314 00:59:36.777862   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.777869   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:36.777875   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:36.777921   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:36.817211   66232 cri.go:89] found id: ""
	I0314 00:59:36.817254   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.817264   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:36.817271   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:36.817320   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:36.855890   66232 cri.go:89] found id: ""
	I0314 00:59:36.855924   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.855943   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:36.855951   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:36.856007   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:36.894333   66232 cri.go:89] found id: ""
	I0314 00:59:36.894360   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.894371   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:36.894391   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:36.894406   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:36.909757   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:36.909796   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:37.039754   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:37.039774   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:37.039785   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:37.100601   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:37.100635   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:37.143950   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:37.143976   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:35.837374   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:38.335068   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:35.692269   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:37.692333   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:37.538270   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:40.039124   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:39.696850   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:39.720410   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:39.720480   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:39.759574   66232 cri.go:89] found id: ""
	I0314 00:59:39.759624   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.759635   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:39.759643   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:39.759719   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:39.802990   66232 cri.go:89] found id: ""
	I0314 00:59:39.803013   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.803021   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:39.803026   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:39.803090   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:39.850691   66232 cri.go:89] found id: ""
	I0314 00:59:39.850718   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.850729   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:39.850736   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:39.850831   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:39.890748   66232 cri.go:89] found id: ""
	I0314 00:59:39.890796   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.890806   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:39.890813   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:39.890871   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:39.929333   66232 cri.go:89] found id: ""
	I0314 00:59:39.929361   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.929368   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:39.929374   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:39.929428   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:39.969207   66232 cri.go:89] found id: ""
	I0314 00:59:39.969241   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.969248   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:39.969254   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:39.969328   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:40.006207   66232 cri.go:89] found id: ""
	I0314 00:59:40.006241   66232 logs.go:276] 0 containers: []
	W0314 00:59:40.006252   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:40.006260   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:40.006343   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:40.047357   66232 cri.go:89] found id: ""
	I0314 00:59:40.047384   66232 logs.go:276] 0 containers: []
	W0314 00:59:40.047391   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:40.047400   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:40.047418   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:40.095431   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:40.095461   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:40.151675   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:40.151710   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:40.169388   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:40.169426   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:40.252915   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:40.252941   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:40.252958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:42.828437   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:42.842753   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:42.842838   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:42.881157   66232 cri.go:89] found id: ""
	I0314 00:59:42.881189   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.881200   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:42.881207   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:42.881267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:42.921364   66232 cri.go:89] found id: ""
	I0314 00:59:42.921393   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.921405   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:42.921412   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:42.921477   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:42.956622   66232 cri.go:89] found id: ""
	I0314 00:59:42.956647   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.956655   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:42.956660   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:42.956705   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:42.994476   66232 cri.go:89] found id: ""
	I0314 00:59:42.994502   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.994514   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:42.994521   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:42.994580   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:43.032061   66232 cri.go:89] found id: ""
	I0314 00:59:43.032089   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.032099   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:43.032106   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:43.032177   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:43.073398   66232 cri.go:89] found id: ""
	I0314 00:59:43.073427   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.073444   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:43.073452   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:43.073527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:40.336003   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.336136   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:40.192758   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.193411   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.538036   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:45.038933   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:43.111407   66232 cri.go:89] found id: ""
	I0314 00:59:43.111891   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.111902   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:43.111909   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:43.111988   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:43.154347   66232 cri.go:89] found id: ""
	I0314 00:59:43.154374   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.154384   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:43.154393   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:43.154422   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:43.202605   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:43.202636   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:43.257108   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:43.257143   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:43.273252   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:43.273282   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:43.347646   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:43.347671   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:43.347687   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:45.920045   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:45.934299   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:45.934379   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:45.973556   66232 cri.go:89] found id: ""
	I0314 00:59:45.973588   66232 logs.go:276] 0 containers: []
	W0314 00:59:45.973599   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:45.973607   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:45.973668   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:46.012623   66232 cri.go:89] found id: ""
	I0314 00:59:46.012653   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.012660   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:46.012667   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:46.012720   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:46.052290   66232 cri.go:89] found id: ""
	I0314 00:59:46.052318   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.052328   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:46.052336   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:46.052401   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:46.089098   66232 cri.go:89] found id: ""
	I0314 00:59:46.089129   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.089139   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:46.089147   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:46.089207   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:46.149733   66232 cri.go:89] found id: ""
	I0314 00:59:46.149768   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.149778   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:46.149787   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:46.149856   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:46.210517   66232 cri.go:89] found id: ""
	I0314 00:59:46.210548   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.210555   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:46.210563   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:46.210631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:46.275257   66232 cri.go:89] found id: ""
	I0314 00:59:46.275288   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.275299   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:46.275307   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:46.275373   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:46.319784   66232 cri.go:89] found id: ""
	I0314 00:59:46.319808   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.319819   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:46.319829   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:46.319843   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:46.366285   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:46.366319   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:46.423978   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:46.424015   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:46.438508   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:46.438535   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:46.509518   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:46.509538   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:46.509552   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:44.337116   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:46.341237   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:44.698272   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:47.192460   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.193298   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:47.537766   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.541370   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.089210   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:49.105225   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:49.105298   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:49.146293   66232 cri.go:89] found id: ""
	I0314 00:59:49.146319   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.146326   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:49.146331   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:49.146377   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:49.190814   66232 cri.go:89] found id: ""
	I0314 00:59:49.190838   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.190847   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:49.190854   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:49.190910   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:49.230181   66232 cri.go:89] found id: ""
	I0314 00:59:49.230206   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.230214   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:49.230219   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:49.230267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:49.268437   66232 cri.go:89] found id: ""
	I0314 00:59:49.268468   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.268479   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:49.268486   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:49.268547   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:49.306838   66232 cri.go:89] found id: ""
	I0314 00:59:49.306869   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.306877   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:49.306883   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:49.306944   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:49.348907   66232 cri.go:89] found id: ""
	I0314 00:59:49.348937   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.348948   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:49.348956   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:49.349014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:49.391993   66232 cri.go:89] found id: ""
	I0314 00:59:49.392017   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.392025   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:49.392030   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:49.392133   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:49.433957   66232 cri.go:89] found id: ""
	I0314 00:59:49.433988   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.434000   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:49.434011   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:49.434026   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:49.490808   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:49.490846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:49.506203   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:49.506231   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:49.596998   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:49.597017   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:49.597034   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:49.683358   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:49.683396   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:52.230217   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:52.243787   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:52.243845   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:52.284399   66232 cri.go:89] found id: ""
	I0314 00:59:52.284424   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.284434   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:52.284441   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:52.284486   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:52.319413   66232 cri.go:89] found id: ""
	I0314 00:59:52.319439   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.319450   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:52.319457   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:52.319517   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:52.355774   66232 cri.go:89] found id: ""
	I0314 00:59:52.355804   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.355812   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:52.355818   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:52.355873   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:52.393420   66232 cri.go:89] found id: ""
	I0314 00:59:52.393445   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.393453   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:52.393459   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:52.393562   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:52.435598   66232 cri.go:89] found id: ""
	I0314 00:59:52.435627   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.435637   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:52.435646   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:52.435700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:52.478202   66232 cri.go:89] found id: ""
	I0314 00:59:52.478230   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.478241   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:52.478250   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:52.478300   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:52.515135   66232 cri.go:89] found id: ""
	I0314 00:59:52.515165   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.515176   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:52.515185   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:52.515251   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:52.553094   66232 cri.go:89] found id: ""
	I0314 00:59:52.553126   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.553143   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:52.553150   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:52.553174   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:52.568538   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:52.568565   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:52.643136   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:52.643164   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:52.643180   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:52.729674   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:52.729708   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:52.778312   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:52.778343   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:48.837200   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:51.336514   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:53.338910   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:51.693709   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:53.694241   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:52.037993   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:54.038771   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:55.333953   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:55.348232   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:55.348292   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:55.386488   66232 cri.go:89] found id: ""
	I0314 00:59:55.386517   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.386526   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:55.386534   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:55.386597   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:55.428706   66232 cri.go:89] found id: ""
	I0314 00:59:55.428737   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.428748   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:55.428755   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:55.428820   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:55.465448   66232 cri.go:89] found id: ""
	I0314 00:59:55.465478   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.465489   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:55.465495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:55.465558   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:55.503442   66232 cri.go:89] found id: ""
	I0314 00:59:55.503469   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.503479   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:55.503487   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:55.503582   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:55.542098   66232 cri.go:89] found id: ""
	I0314 00:59:55.542127   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.542137   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:55.542145   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:55.542209   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:55.580298   66232 cri.go:89] found id: ""
	I0314 00:59:55.580321   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.580329   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:55.580335   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:55.580405   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:55.625460   66232 cri.go:89] found id: ""
	I0314 00:59:55.625482   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.625489   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:55.625495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:55.625544   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:55.663273   66232 cri.go:89] found id: ""
	I0314 00:59:55.663301   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.663316   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:55.663327   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:55.663373   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:55.680020   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:55.680047   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:55.764504   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:55.764523   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:55.764537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:55.842804   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:55.842837   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:55.889505   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:55.889540   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:55.836332   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.335436   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:56.193387   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.692808   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:56.045666   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.538405   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.445178   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:58.459321   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:58.459397   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:58.498338   66232 cri.go:89] found id: ""
	I0314 00:59:58.498362   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.498369   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:58.498374   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:58.498422   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:58.536406   66232 cri.go:89] found id: ""
	I0314 00:59:58.536434   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.536444   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:58.536451   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:58.536509   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:58.574902   66232 cri.go:89] found id: ""
	I0314 00:59:58.574930   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.574937   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:58.574943   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:58.574988   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:58.613132   66232 cri.go:89] found id: ""
	I0314 00:59:58.613154   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.613162   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:58.613167   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:58.613211   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:58.651052   66232 cri.go:89] found id: ""
	I0314 00:59:58.651076   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.651085   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:58.651104   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:58.651170   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:58.686347   66232 cri.go:89] found id: ""
	I0314 00:59:58.686375   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.686385   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:58.686393   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:58.686443   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:58.725992   66232 cri.go:89] found id: ""
	I0314 00:59:58.726021   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.726030   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:58.726037   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:58.726113   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:58.764130   66232 cri.go:89] found id: ""
	I0314 00:59:58.764153   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.764161   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:58.764169   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:58.764181   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:58.816153   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:58.816195   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:58.831675   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:58.831703   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:58.912867   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:58.912890   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:58.912902   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:59.000502   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:59.000537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:01.544701   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:01.561114   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:01.561192   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:01.603886   66232 cri.go:89] found id: ""
	I0314 01:00:01.603916   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.603924   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:01.603929   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:01.603989   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:01.645142   66232 cri.go:89] found id: ""
	I0314 01:00:01.645174   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.645189   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:01.645196   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:01.645248   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:01.686281   66232 cri.go:89] found id: ""
	I0314 01:00:01.686317   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.686326   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:01.686332   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:01.686389   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:01.729909   66232 cri.go:89] found id: ""
	I0314 01:00:01.729945   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.729955   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:01.729963   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:01.730029   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:01.773709   66232 cri.go:89] found id: ""
	I0314 01:00:01.773746   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.773754   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:01.773770   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:01.773833   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:01.813535   66232 cri.go:89] found id: ""
	I0314 01:00:01.813560   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.813568   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:01.813573   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:01.813632   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:01.855452   66232 cri.go:89] found id: ""
	I0314 01:00:01.855482   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.855493   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:01.855499   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:01.855561   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:01.892261   66232 cri.go:89] found id: ""
	I0314 01:00:01.892287   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.892297   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:01.892308   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:01.892322   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:01.945227   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:01.945258   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:01.961280   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:01.961307   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:02.039204   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:02.039227   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:02.039241   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:02.116966   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:02.117002   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:00.840447   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:03.335752   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:00.693223   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:02.694565   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:00.538670   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:02.539348   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:05.037780   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:04.659869   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:04.673750   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:04.673818   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:04.713767   66232 cri.go:89] found id: ""
	I0314 01:00:04.713802   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.713813   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:04.713820   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:04.713882   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:04.750205   66232 cri.go:89] found id: ""
	I0314 01:00:04.750240   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.750252   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:04.750259   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:04.750323   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:04.789742   66232 cri.go:89] found id: ""
	I0314 01:00:04.789770   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.789778   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:04.789784   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:04.789832   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:04.826033   66232 cri.go:89] found id: ""
	I0314 01:00:04.826071   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.826091   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:04.826099   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:04.826161   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:04.865283   66232 cri.go:89] found id: ""
	I0314 01:00:04.865320   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.865330   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:04.865339   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:04.865387   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:04.906716   66232 cri.go:89] found id: ""
	I0314 01:00:04.906745   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.906756   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:04.906774   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:04.906835   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:04.943834   66232 cri.go:89] found id: ""
	I0314 01:00:04.943867   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.943879   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:04.943887   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:04.943953   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:04.986408   66232 cri.go:89] found id: ""
	I0314 01:00:04.986435   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.986445   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:04.986456   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:04.986472   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:05.040543   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:05.040583   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:05.055657   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:05.055685   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:05.133883   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:05.133907   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:05.133921   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:05.213133   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:05.213170   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:07.754533   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:07.768008   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:07.768084   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:07.807785   66232 cri.go:89] found id: ""
	I0314 01:00:07.807814   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.807823   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:07.807830   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:07.807889   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:07.847500   66232 cri.go:89] found id: ""
	I0314 01:00:07.847529   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.847539   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:07.847547   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:07.847609   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:07.886507   66232 cri.go:89] found id: ""
	I0314 01:00:07.886534   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.886557   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:07.886563   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:07.886619   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:07.923881   66232 cri.go:89] found id: ""
	I0314 01:00:07.923908   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.923918   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:07.923925   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:07.923985   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:07.959149   66232 cri.go:89] found id: ""
	I0314 01:00:07.959179   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.959190   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:07.959198   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:07.959257   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:07.995821   66232 cri.go:89] found id: ""
	I0314 01:00:07.995849   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.995861   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:07.995869   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:07.995926   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:08.033530   66232 cri.go:89] found id: ""
	I0314 01:00:08.033554   66232 logs.go:276] 0 containers: []
	W0314 01:00:08.033561   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:08.033567   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:08.033613   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:08.069304   66232 cri.go:89] found id: ""
	I0314 01:00:08.069332   66232 logs.go:276] 0 containers: []
	W0314 01:00:08.069341   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:08.069352   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:08.069366   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:05.838145   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:08.336193   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:05.192544   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:07.193040   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:09.195569   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:07.040795   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:09.538606   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:08.122695   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:08.122727   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:08.138439   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:08.138466   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:08.220553   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:08.220574   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:08.220586   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:08.301108   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:08.301143   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:10.858540   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:10.872473   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:10.872527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:10.911114   66232 cri.go:89] found id: ""
	I0314 01:00:10.911143   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.911154   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:10.911161   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:10.911218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:10.951647   66232 cri.go:89] found id: ""
	I0314 01:00:10.951678   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.951690   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:10.951697   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:10.951764   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:10.989244   66232 cri.go:89] found id: ""
	I0314 01:00:10.989272   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.989283   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:10.989291   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:10.989368   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:11.029977   66232 cri.go:89] found id: ""
	I0314 01:00:11.030004   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.030011   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:11.030017   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:11.030079   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:11.067444   66232 cri.go:89] found id: ""
	I0314 01:00:11.067467   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.067474   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:11.067480   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:11.067527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:11.104202   66232 cri.go:89] found id: ""
	I0314 01:00:11.104225   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.104233   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:11.104242   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:11.104302   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:11.143323   66232 cri.go:89] found id: ""
	I0314 01:00:11.143348   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.143376   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:11.143384   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:11.143438   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:11.182568   66232 cri.go:89] found id: ""
	I0314 01:00:11.182598   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.182608   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:11.182619   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:11.182640   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:11.199532   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:11.199572   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:11.276697   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:11.276722   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:11.276737   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:11.362086   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:11.362121   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:11.407686   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:11.407721   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:10.338610   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:12.835743   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:11.201752   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:13.692443   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:12.038010   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:14.038915   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:13.965971   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:13.981052   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:13.981124   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:14.021047   66232 cri.go:89] found id: ""
	I0314 01:00:14.021073   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.021085   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:14.021092   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:14.021150   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:14.066605   66232 cri.go:89] found id: ""
	I0314 01:00:14.066632   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.066638   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:14.066644   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:14.066689   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:14.105253   66232 cri.go:89] found id: ""
	I0314 01:00:14.105281   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.105290   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:14.105299   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:14.105407   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:14.141084   66232 cri.go:89] found id: ""
	I0314 01:00:14.141116   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.141126   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:14.141133   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:14.141194   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:14.177883   66232 cri.go:89] found id: ""
	I0314 01:00:14.177914   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.177924   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:14.177944   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:14.178010   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:14.217102   66232 cri.go:89] found id: ""
	I0314 01:00:14.217133   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.217144   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:14.217162   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:14.217218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:14.256624   66232 cri.go:89] found id: ""
	I0314 01:00:14.256652   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.256662   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:14.256669   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:14.256731   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:14.295330   66232 cri.go:89] found id: ""
	I0314 01:00:14.295358   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.295368   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:14.295378   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:14.295395   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:14.351898   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:14.351947   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:14.368360   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:14.368399   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:14.447629   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:14.447651   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:14.447678   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:14.536275   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:14.536307   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:17.079641   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:17.093657   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:17.093730   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:17.131290   66232 cri.go:89] found id: ""
	I0314 01:00:17.131318   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.131327   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:17.131333   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:17.131379   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:17.169832   66232 cri.go:89] found id: ""
	I0314 01:00:17.169864   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.169874   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:17.169882   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:17.169942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:17.206961   66232 cri.go:89] found id: ""
	I0314 01:00:17.206982   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.206989   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:17.206994   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:17.207047   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:17.245675   66232 cri.go:89] found id: ""
	I0314 01:00:17.245703   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.245714   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:17.245721   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:17.245776   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:17.287768   66232 cri.go:89] found id: ""
	I0314 01:00:17.287797   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.287808   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:17.287815   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:17.287881   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:17.322555   66232 cri.go:89] found id: ""
	I0314 01:00:17.322590   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.322600   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:17.322608   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:17.322669   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:17.361149   66232 cri.go:89] found id: ""
	I0314 01:00:17.361176   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.361190   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:17.361197   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:17.361255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:17.397191   66232 cri.go:89] found id: ""
	I0314 01:00:17.397218   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.397227   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:17.397236   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:17.397248   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:17.412959   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:17.412988   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:17.493344   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:17.493364   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:17.493375   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:17.573531   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:17.573564   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:17.616326   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:17.616369   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:14.837070   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:17.335625   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:15.693453   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:18.192702   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:16.537571   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:18.537742   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.171238   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:20.186834   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:20.186890   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:20.226834   66232 cri.go:89] found id: ""
	I0314 01:00:20.226856   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.226863   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:20.226868   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:20.226916   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:20.263003   66232 cri.go:89] found id: ""
	I0314 01:00:20.263032   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.263043   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:20.263052   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:20.263135   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:20.306354   66232 cri.go:89] found id: ""
	I0314 01:00:20.306378   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.306388   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:20.306397   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:20.306458   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:20.342460   66232 cri.go:89] found id: ""
	I0314 01:00:20.342491   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.342501   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:20.342509   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:20.342572   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:20.383367   66232 cri.go:89] found id: ""
	I0314 01:00:20.383395   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.383406   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:20.383414   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:20.383474   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:20.423190   66232 cri.go:89] found id: ""
	I0314 01:00:20.423220   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.423231   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:20.423240   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:20.423296   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:20.473454   66232 cri.go:89] found id: ""
	I0314 01:00:20.473501   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.473510   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:20.473518   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:20.473577   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:20.517922   66232 cri.go:89] found id: ""
	I0314 01:00:20.517954   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.517964   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:20.517976   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:20.517992   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:20.572023   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:20.572059   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:20.589573   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:20.589601   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:20.670843   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:20.670866   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:20.670881   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:20.753165   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:20.753201   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:19.336013   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:21.338995   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.194020   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:22.194237   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.539631   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:22.539868   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:25.037490   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:23.299823   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:23.313303   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:23.313398   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:23.352500   66232 cri.go:89] found id: ""
	I0314 01:00:23.352531   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.352542   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:23.352550   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:23.352610   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:23.391967   66232 cri.go:89] found id: ""
	I0314 01:00:23.391997   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.392005   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:23.392013   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:23.392078   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:23.433269   66232 cri.go:89] found id: ""
	I0314 01:00:23.433303   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.433314   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:23.433324   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:23.433388   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:23.471251   66232 cri.go:89] found id: ""
	I0314 01:00:23.471278   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.471290   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:23.471297   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:23.471359   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:23.507920   66232 cri.go:89] found id: ""
	I0314 01:00:23.507952   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.507960   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:23.507966   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:23.508023   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:23.550432   66232 cri.go:89] found id: ""
	I0314 01:00:23.550464   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.550474   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:23.550483   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:23.550570   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:23.589750   66232 cri.go:89] found id: ""
	I0314 01:00:23.589773   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.589781   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:23.589789   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:23.589853   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:23.626135   66232 cri.go:89] found id: ""
	I0314 01:00:23.626171   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.626191   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:23.626202   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:23.626217   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:23.681729   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:23.681763   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:23.698219   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:23.698246   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:23.773285   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:23.773309   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:23.773321   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:23.856417   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:23.856449   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:26.399787   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:26.414459   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:26.414525   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:26.452117   66232 cri.go:89] found id: ""
	I0314 01:00:26.452142   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.452153   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:26.452162   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:26.452223   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:26.488892   66232 cri.go:89] found id: ""
	I0314 01:00:26.488918   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.488925   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:26.488931   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:26.488980   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:26.530194   66232 cri.go:89] found id: ""
	I0314 01:00:26.530224   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.530234   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:26.530242   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:26.530307   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:26.571356   66232 cri.go:89] found id: ""
	I0314 01:00:26.571382   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.571394   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:26.571402   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:26.571469   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:26.611465   66232 cri.go:89] found id: ""
	I0314 01:00:26.611492   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.611500   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:26.611522   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:26.611572   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:26.649783   66232 cri.go:89] found id: ""
	I0314 01:00:26.649811   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.649821   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:26.649830   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:26.649894   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:26.687519   66232 cri.go:89] found id: ""
	I0314 01:00:26.687546   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.687556   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:26.687569   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:26.687631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:26.726277   66232 cri.go:89] found id: ""
	I0314 01:00:26.726311   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.726322   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:26.726333   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:26.726349   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:26.743133   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:26.743162   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:26.824026   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:26.824046   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:26.824062   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:26.907032   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:26.907065   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:26.977583   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:26.977609   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:23.837152   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:26.335576   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:24.694276   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:27.192662   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.193302   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:27.037952   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.038545   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.530758   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:29.546984   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:29.547050   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:29.589191   66232 cri.go:89] found id: ""
	I0314 01:00:29.589214   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.589222   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:29.589231   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:29.589294   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:29.630380   66232 cri.go:89] found id: ""
	I0314 01:00:29.630407   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.630419   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:29.630426   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:29.630488   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:29.667407   66232 cri.go:89] found id: ""
	I0314 01:00:29.667443   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.667455   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:29.667463   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:29.667524   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:29.705745   66232 cri.go:89] found id: ""
	I0314 01:00:29.705776   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.705784   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:29.705790   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:29.705851   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:29.745280   66232 cri.go:89] found id: ""
	I0314 01:00:29.745314   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.745324   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:29.745335   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:29.745390   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:29.782900   66232 cri.go:89] found id: ""
	I0314 01:00:29.782935   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.782945   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:29.782954   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:29.783014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:29.825324   66232 cri.go:89] found id: ""
	I0314 01:00:29.825352   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.825363   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:29.825371   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:29.825436   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:29.869433   66232 cri.go:89] found id: ""
	I0314 01:00:29.869466   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.869476   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:29.869487   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:29.869502   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:29.912468   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:29.912494   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:29.965515   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:29.965555   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:29.982343   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:29.982367   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:30.057772   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:30.057797   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:30.057814   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:32.644707   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:32.667874   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:32.667950   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:32.727931   66232 cri.go:89] found id: ""
	I0314 01:00:32.727960   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.727971   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:32.727979   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:32.728038   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:32.766885   66232 cri.go:89] found id: ""
	I0314 01:00:32.766911   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.766921   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:32.766929   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:32.766989   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:32.804099   66232 cri.go:89] found id: ""
	I0314 01:00:32.804128   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.804137   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:32.804143   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:32.804200   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:32.845468   66232 cri.go:89] found id: ""
	I0314 01:00:32.845498   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.845507   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:32.845516   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:32.845607   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:32.884350   66232 cri.go:89] found id: ""
	I0314 01:00:32.884372   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.884380   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:32.884386   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:32.884437   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:32.920634   66232 cri.go:89] found id: ""
	I0314 01:00:32.920676   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.920692   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:32.920700   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:32.920756   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:32.959586   66232 cri.go:89] found id: ""
	I0314 01:00:32.959616   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.959627   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:32.959634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:32.959699   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:32.998814   66232 cri.go:89] found id: ""
	I0314 01:00:32.998854   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.998865   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:32.998882   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:32.998895   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:33.054782   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:33.054813   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:33.069772   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:33.069807   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:00:28.836740   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.335908   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:33.336613   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.692393   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:33.695343   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.539723   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:34.038889   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	W0314 01:00:33.153893   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:33.153913   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:33.153925   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:33.234165   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:33.234197   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:35.781872   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:35.797220   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:35.797300   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:35.836749   66232 cri.go:89] found id: ""
	I0314 01:00:35.836773   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.836779   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:35.836785   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:35.836841   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:35.875754   66232 cri.go:89] found id: ""
	I0314 01:00:35.875782   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.875790   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:35.875797   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:35.875844   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:35.914337   66232 cri.go:89] found id: ""
	I0314 01:00:35.914360   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.914368   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:35.914373   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:35.914428   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:35.954287   66232 cri.go:89] found id: ""
	I0314 01:00:35.954306   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.954313   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:35.954318   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:35.954365   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:35.995361   66232 cri.go:89] found id: ""
	I0314 01:00:35.995385   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.995393   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:35.995398   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:35.995455   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:36.040462   66232 cri.go:89] found id: ""
	I0314 01:00:36.040488   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.040497   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:36.040503   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:36.040567   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:36.078740   66232 cri.go:89] found id: ""
	I0314 01:00:36.078786   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.078797   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:36.078814   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:36.078885   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:36.120165   66232 cri.go:89] found id: ""
	I0314 01:00:36.120193   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.120203   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:36.120213   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:36.120239   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:36.136275   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:36.136312   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:36.217907   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:36.217929   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:36.217944   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:36.295177   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:36.295212   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:36.342587   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:36.342623   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:35.336966   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:37.337764   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:36.193887   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.693150   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:36.538529   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.538996   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.900832   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:38.914693   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:38.914782   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:38.954297   66232 cri.go:89] found id: ""
	I0314 01:00:38.954333   66232 logs.go:276] 0 containers: []
	W0314 01:00:38.954347   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:38.954354   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:38.954414   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:38.992427   66232 cri.go:89] found id: ""
	I0314 01:00:38.992458   66232 logs.go:276] 0 containers: []
	W0314 01:00:38.992468   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:38.992474   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:38.992521   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:39.028595   66232 cri.go:89] found id: ""
	I0314 01:00:39.028629   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.028640   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:39.028647   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:39.028707   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:39.064418   66232 cri.go:89] found id: ""
	I0314 01:00:39.064443   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.064450   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:39.064456   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:39.064503   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:39.101007   66232 cri.go:89] found id: ""
	I0314 01:00:39.101050   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.101060   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:39.101066   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:39.101125   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:39.142913   66232 cri.go:89] found id: ""
	I0314 01:00:39.142940   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.142950   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:39.142957   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:39.143018   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:39.179957   66232 cri.go:89] found id: ""
	I0314 01:00:39.179986   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.179997   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:39.180007   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:39.180068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:39.219688   66232 cri.go:89] found id: ""
	I0314 01:00:39.219712   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.219720   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:39.219730   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:39.219747   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:39.234611   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:39.234642   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:39.306760   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:39.306808   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:39.306824   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:39.390739   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:39.390799   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:39.441782   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:39.441813   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:41.994667   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:42.008795   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:42.008865   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:42.045814   66232 cri.go:89] found id: ""
	I0314 01:00:42.045839   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.045846   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:42.045852   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:42.045903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:42.085519   66232 cri.go:89] found id: ""
	I0314 01:00:42.085550   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.085563   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:42.085571   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:42.085636   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:42.127334   66232 cri.go:89] found id: ""
	I0314 01:00:42.127359   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.127368   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:42.127374   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:42.127425   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:42.168890   66232 cri.go:89] found id: ""
	I0314 01:00:42.168915   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.168923   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:42.168929   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:42.168990   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:42.209915   66232 cri.go:89] found id: ""
	I0314 01:00:42.209937   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.209945   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:42.209950   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:42.210005   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:42.250858   66232 cri.go:89] found id: ""
	I0314 01:00:42.250880   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.250888   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:42.250897   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:42.250952   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:42.288731   66232 cri.go:89] found id: ""
	I0314 01:00:42.288779   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.288791   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:42.288799   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:42.288854   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:42.329002   66232 cri.go:89] found id: ""
	I0314 01:00:42.329030   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.329041   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:42.329052   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:42.329066   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:42.371408   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:42.371435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:42.429017   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:42.429053   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:42.446217   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:42.446255   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:42.525765   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:42.525786   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:42.525798   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:39.338188   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:41.836306   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:40.694284   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:43.193538   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:40.540167   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:43.039511   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.122600   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:45.137115   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:45.137172   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:45.177658   66232 cri.go:89] found id: ""
	I0314 01:00:45.177685   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.177693   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:45.177698   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:45.177758   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:45.218191   66232 cri.go:89] found id: ""
	I0314 01:00:45.218220   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.218228   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:45.218234   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:45.218291   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:45.263650   66232 cri.go:89] found id: ""
	I0314 01:00:45.263673   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.263682   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:45.263688   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:45.263741   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:45.299533   66232 cri.go:89] found id: ""
	I0314 01:00:45.299562   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.299573   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:45.299579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:45.299626   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:45.338985   66232 cri.go:89] found id: ""
	I0314 01:00:45.339011   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.339021   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:45.339028   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:45.339089   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:45.380178   66232 cri.go:89] found id: ""
	I0314 01:00:45.380202   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.380210   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:45.380216   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:45.380272   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:45.420424   66232 cri.go:89] found id: ""
	I0314 01:00:45.420458   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.420470   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:45.420478   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:45.420540   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:45.460829   66232 cri.go:89] found id: ""
	I0314 01:00:45.460852   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.460860   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:45.460870   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:45.460886   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:45.516541   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:45.516578   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:45.532856   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:45.532880   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:45.611749   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:45.611772   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:45.611786   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:45.693268   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:45.693297   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:43.836776   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:46.336671   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.692531   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:47.692748   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.539526   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:47.542274   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.037560   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:48.240420   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:48.254985   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:48.255045   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:48.294167   66232 cri.go:89] found id: ""
	I0314 01:00:48.294190   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.294198   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:48.294204   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:48.294265   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:48.331189   66232 cri.go:89] found id: ""
	I0314 01:00:48.331214   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.331223   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:48.331231   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:48.331291   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:48.367601   66232 cri.go:89] found id: ""
	I0314 01:00:48.367641   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.367652   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:48.367660   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:48.367723   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:48.405032   66232 cri.go:89] found id: ""
	I0314 01:00:48.405061   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.405072   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:48.405080   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:48.405148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:48.444641   66232 cri.go:89] found id: ""
	I0314 01:00:48.444664   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.444672   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:48.444678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:48.444737   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:48.481624   66232 cri.go:89] found id: ""
	I0314 01:00:48.481653   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.481661   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:48.481667   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:48.481718   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:48.518944   66232 cri.go:89] found id: ""
	I0314 01:00:48.518976   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.518984   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:48.518989   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:48.519047   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:48.558455   66232 cri.go:89] found id: ""
	I0314 01:00:48.558495   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.558506   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:48.558518   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:48.558533   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:48.604953   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:48.604983   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:48.655766   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:48.655799   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:48.670370   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:48.670395   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:48.750567   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:48.750588   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:48.750601   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:51.342004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:51.356115   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:51.356180   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:51.393740   66232 cri.go:89] found id: ""
	I0314 01:00:51.393766   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.393773   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:51.393778   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:51.393824   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:51.432939   66232 cri.go:89] found id: ""
	I0314 01:00:51.432969   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.432980   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:51.432998   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:51.433066   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:51.469309   66232 cri.go:89] found id: ""
	I0314 01:00:51.469332   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.469340   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:51.469345   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:51.469395   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:51.506576   66232 cri.go:89] found id: ""
	I0314 01:00:51.506606   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.506618   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:51.506626   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:51.506687   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:51.547323   66232 cri.go:89] found id: ""
	I0314 01:00:51.547348   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.547358   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:51.547365   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:51.547422   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:51.588257   66232 cri.go:89] found id: ""
	I0314 01:00:51.588281   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.588289   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:51.588295   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:51.588353   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:51.629026   66232 cri.go:89] found id: ""
	I0314 01:00:51.629049   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.629057   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:51.629064   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:51.629116   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:51.668857   66232 cri.go:89] found id: ""
	I0314 01:00:51.668890   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.668903   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:51.668914   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:51.668930   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:51.724282   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:51.724329   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:51.739513   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:51.739543   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:51.815089   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:51.815116   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:51.815132   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:51.898576   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:51.898613   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:48.836517   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.837605   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:53.334491   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.192748   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:52.694281   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:52.038194   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:54.538685   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:54.441122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:54.456300   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:54.456358   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:54.492731   66232 cri.go:89] found id: ""
	I0314 01:00:54.492764   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.492776   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:54.492784   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:54.492847   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:54.530965   66232 cri.go:89] found id: ""
	I0314 01:00:54.530994   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.531005   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:54.531013   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:54.531075   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:54.570440   66232 cri.go:89] found id: ""
	I0314 01:00:54.570470   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.570487   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:54.570495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:54.570557   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:54.611569   66232 cri.go:89] found id: ""
	I0314 01:00:54.611592   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.611599   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:54.611606   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:54.611660   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:54.648383   66232 cri.go:89] found id: ""
	I0314 01:00:54.648412   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.648421   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:54.648427   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:54.648476   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:54.686598   66232 cri.go:89] found id: ""
	I0314 01:00:54.686621   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.686636   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:54.686644   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:54.686701   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:54.726413   66232 cri.go:89] found id: ""
	I0314 01:00:54.726436   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.726444   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:54.726450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:54.726496   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:54.764126   66232 cri.go:89] found id: ""
	I0314 01:00:54.764167   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.764177   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:54.764187   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:54.764201   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:54.841584   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:54.841612   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:54.841628   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:54.929736   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:54.929770   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:54.972612   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:54.972638   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:55.038415   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:55.038443   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:57.553419   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:57.567807   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:57.567865   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:57.608042   66232 cri.go:89] found id: ""
	I0314 01:00:57.608069   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.608077   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:57.608082   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:57.608138   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:57.647991   66232 cri.go:89] found id: ""
	I0314 01:00:57.648022   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.648031   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:57.648036   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:57.648096   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:57.687506   66232 cri.go:89] found id: ""
	I0314 01:00:57.687529   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.687537   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:57.687544   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:57.687603   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:57.726178   66232 cri.go:89] found id: ""
	I0314 01:00:57.726214   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.726224   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:57.726233   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:57.726297   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:57.763847   66232 cri.go:89] found id: ""
	I0314 01:00:57.763874   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.763881   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:57.763887   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:57.763946   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:57.800962   66232 cri.go:89] found id: ""
	I0314 01:00:57.800990   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.801001   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:57.801010   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:57.801063   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:57.838942   66232 cri.go:89] found id: ""
	I0314 01:00:57.838963   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.838970   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:57.838975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:57.839021   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:57.875376   66232 cri.go:89] found id: ""
	I0314 01:00:57.875405   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.875415   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:57.875424   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:57.875435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:57.917732   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:57.917755   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:57.971528   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:57.971561   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:57.986854   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:57.986879   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:58.066955   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:58.066975   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:58.066985   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:55.337356   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.836856   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:55.191933   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.193287   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:59.197833   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.039559   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:59.537165   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:00.655786   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:00.672026   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:00.672105   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:00.711128   66232 cri.go:89] found id: ""
	I0314 01:01:00.711157   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.711167   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:00.711174   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:00.711236   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:00.748236   66232 cri.go:89] found id: ""
	I0314 01:01:00.748264   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.748276   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:00.748284   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:00.748347   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:00.787436   66232 cri.go:89] found id: ""
	I0314 01:01:00.787470   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.787478   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:00.787486   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:00.787536   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:00.828583   66232 cri.go:89] found id: ""
	I0314 01:01:00.828605   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.828615   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:00.828623   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:00.828683   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:00.866856   66232 cri.go:89] found id: ""
	I0314 01:01:00.866885   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.866896   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:00.866903   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:00.866964   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:00.904860   66232 cri.go:89] found id: ""
	I0314 01:01:00.904883   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.904890   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:00.904895   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:00.904943   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:00.942199   66232 cri.go:89] found id: ""
	I0314 01:01:00.942232   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.942243   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:00.942253   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:00.942322   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:01.003925   66232 cri.go:89] found id: ""
	I0314 01:01:01.003951   66232 logs.go:276] 0 containers: []
	W0314 01:01:01.003961   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:01.003972   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:01.003987   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:01.057875   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:01.057903   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:01.074102   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:01.074128   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:01.147570   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:01.147602   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:01.147617   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:01.229816   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:01.229846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:00.337903   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:02.836288   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:01.693336   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:04.193878   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:01.539596   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:04.037927   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:03.775990   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:03.789826   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:03.789893   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:03.832595   66232 cri.go:89] found id: ""
	I0314 01:01:03.832620   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.832631   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:03.832639   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:03.832701   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:03.870895   66232 cri.go:89] found id: ""
	I0314 01:01:03.870914   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.870922   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:03.870928   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:03.870975   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:03.909337   66232 cri.go:89] found id: ""
	I0314 01:01:03.909368   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.909379   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:03.909387   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:03.909447   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:03.952071   66232 cri.go:89] found id: ""
	I0314 01:01:03.952100   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.952110   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:03.952119   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:03.952182   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:03.989374   66232 cri.go:89] found id: ""
	I0314 01:01:03.989403   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.989413   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:03.989421   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:03.989470   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:04.027654   66232 cri.go:89] found id: ""
	I0314 01:01:04.027683   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.027693   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:04.027702   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:04.027770   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:04.064870   66232 cri.go:89] found id: ""
	I0314 01:01:04.064904   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.064915   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:04.064923   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:04.064978   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:04.103214   66232 cri.go:89] found id: ""
	I0314 01:01:04.103246   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.103257   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:04.103268   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:04.103282   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:04.154061   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:04.154098   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:04.168955   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:04.168981   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:04.245214   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:04.245239   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:04.245254   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:04.321782   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:04.321822   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:06.864312   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:06.879181   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:06.879259   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:06.919707   66232 cri.go:89] found id: ""
	I0314 01:01:06.919731   66232 logs.go:276] 0 containers: []
	W0314 01:01:06.919742   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:06.919749   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:06.919809   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:06.964118   66232 cri.go:89] found id: ""
	I0314 01:01:06.964154   66232 logs.go:276] 0 containers: []
	W0314 01:01:06.964165   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:06.964173   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:06.964222   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:07.005923   66232 cri.go:89] found id: ""
	I0314 01:01:07.005948   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.005955   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:07.005961   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:07.006014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:07.048297   66232 cri.go:89] found id: ""
	I0314 01:01:07.048329   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.048336   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:07.048342   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:07.048400   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:07.089009   66232 cri.go:89] found id: ""
	I0314 01:01:07.089036   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.089044   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:07.089049   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:07.089108   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:07.125228   66232 cri.go:89] found id: ""
	I0314 01:01:07.125251   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.125259   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:07.125269   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:07.125329   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:07.163710   66232 cri.go:89] found id: ""
	I0314 01:01:07.163736   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.163743   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:07.163751   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:07.163797   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:07.202886   66232 cri.go:89] found id: ""
	I0314 01:01:07.202909   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.202916   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:07.202924   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:07.202936   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:07.249071   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:07.249098   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:07.304923   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:07.304958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:07.319983   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:07.320011   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:07.398592   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:07.398627   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:07.398640   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:05.337479   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:07.836304   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:06.692373   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.192747   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:06.539182   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.038291   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.987439   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:10.002348   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:10.002424   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:10.039153   66232 cri.go:89] found id: ""
	I0314 01:01:10.039173   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.039179   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:10.039185   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:10.039236   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:10.073527   66232 cri.go:89] found id: ""
	I0314 01:01:10.073557   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.073568   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:10.073575   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:10.073650   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:10.112192   66232 cri.go:89] found id: ""
	I0314 01:01:10.112213   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.112223   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:10.112230   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:10.112288   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:10.152821   66232 cri.go:89] found id: ""
	I0314 01:01:10.152848   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.152857   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:10.152862   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:10.152919   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:10.189327   66232 cri.go:89] found id: ""
	I0314 01:01:10.189352   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.189364   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:10.189371   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:10.189427   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:10.233885   66232 cri.go:89] found id: ""
	I0314 01:01:10.233909   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.233917   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:10.233923   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:10.233975   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:10.272033   66232 cri.go:89] found id: ""
	I0314 01:01:10.272061   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.272069   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:10.272075   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:10.272129   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:10.312680   66232 cri.go:89] found id: ""
	I0314 01:01:10.312706   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.312717   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:10.312727   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:10.312742   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:10.327507   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:10.327537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:10.410274   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:10.410299   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:10.410311   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:10.498686   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:10.498721   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:10.543509   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:10.543561   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:13.098621   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:10.335968   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:12.836293   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:11.692899   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.696150   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:11.538154   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.540093   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.114598   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:13.114685   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:13.169907   66232 cri.go:89] found id: ""
	I0314 01:01:13.169930   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.169937   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:13.169943   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:13.169999   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:13.237394   66232 cri.go:89] found id: ""
	I0314 01:01:13.237417   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.237429   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:13.237439   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:13.237502   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:13.295227   66232 cri.go:89] found id: ""
	I0314 01:01:13.295250   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.295258   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:13.295265   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:13.295326   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:13.333351   66232 cri.go:89] found id: ""
	I0314 01:01:13.333378   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.333388   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:13.333396   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:13.333457   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:13.376480   66232 cri.go:89] found id: ""
	I0314 01:01:13.376503   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.376511   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:13.376516   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:13.376578   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:13.416746   66232 cri.go:89] found id: ""
	I0314 01:01:13.416778   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.416786   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:13.416792   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:13.416842   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:13.455971   66232 cri.go:89] found id: ""
	I0314 01:01:13.456004   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.456014   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:13.456022   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:13.456090   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:13.493921   66232 cri.go:89] found id: ""
	I0314 01:01:13.493952   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.493964   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:13.493975   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:13.493994   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:13.582269   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:13.582317   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:13.627643   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:13.627675   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:13.680989   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:13.681021   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:13.696675   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:13.696708   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:13.768850   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:16.269385   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:16.284543   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:16.284607   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:16.322317   66232 cri.go:89] found id: ""
	I0314 01:01:16.322345   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.322356   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:16.322364   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:16.322412   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:16.362651   66232 cri.go:89] found id: ""
	I0314 01:01:16.362686   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.362697   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:16.362705   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:16.362782   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:16.403239   66232 cri.go:89] found id: ""
	I0314 01:01:16.403268   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.403276   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:16.403282   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:16.403339   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:16.442326   66232 cri.go:89] found id: ""
	I0314 01:01:16.442348   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.442355   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:16.442361   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:16.442423   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:16.480694   66232 cri.go:89] found id: ""
	I0314 01:01:16.480722   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.480733   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:16.480741   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:16.480809   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:16.521555   66232 cri.go:89] found id: ""
	I0314 01:01:16.521585   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.521596   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:16.521603   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:16.521663   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:16.564517   66232 cri.go:89] found id: ""
	I0314 01:01:16.564544   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.564555   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:16.564561   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:16.564641   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:16.602650   66232 cri.go:89] found id: ""
	I0314 01:01:16.602680   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.602690   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:16.602701   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:16.602715   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:16.645742   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:16.645777   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:16.704940   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:16.704972   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:16.720393   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:16.720420   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:16.799609   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:16.799640   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:16.799655   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:14.836773   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:17.336818   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:16.192938   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:18.193968   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:16.038263   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:18.538739   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:19.388482   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:19.402293   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:19.402372   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:19.439978   66232 cri.go:89] found id: ""
	I0314 01:01:19.440002   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.440025   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:19.440033   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:19.440112   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:19.475984   66232 cri.go:89] found id: ""
	I0314 01:01:19.476011   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.476019   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:19.476026   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:19.476078   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:19.512705   66232 cri.go:89] found id: ""
	I0314 01:01:19.512733   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.512742   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:19.512748   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:19.512793   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:19.552300   66232 cri.go:89] found id: ""
	I0314 01:01:19.552329   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.552339   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:19.552347   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:19.552413   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:19.598630   66232 cri.go:89] found id: ""
	I0314 01:01:19.598660   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.598670   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:19.598678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:19.598741   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:19.635883   66232 cri.go:89] found id: ""
	I0314 01:01:19.635912   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.635924   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:19.635931   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:19.635991   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:19.670339   66232 cri.go:89] found id: ""
	I0314 01:01:19.670364   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.670371   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:19.670377   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:19.670430   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:19.709469   66232 cri.go:89] found id: ""
	I0314 01:01:19.709512   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.709522   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:19.709533   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:19.709551   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:19.782157   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:19.782181   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:19.782192   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:19.866496   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:19.866531   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:19.910167   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:19.910198   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:19.963516   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:19.963546   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:22.478995   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:22.493273   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:22.493351   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:22.531559   66232 cri.go:89] found id: ""
	I0314 01:01:22.531581   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.531588   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:22.531594   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:22.531651   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:22.569478   66232 cri.go:89] found id: ""
	I0314 01:01:22.569508   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.569516   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:22.569524   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:22.569570   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:22.607573   66232 cri.go:89] found id: ""
	I0314 01:01:22.607599   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.607615   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:22.607625   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:22.607700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:22.644849   66232 cri.go:89] found id: ""
	I0314 01:01:22.644875   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.644885   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:22.644893   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:22.644950   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:22.683745   66232 cri.go:89] found id: ""
	I0314 01:01:22.683771   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.683779   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:22.683785   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:22.683845   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:22.723426   66232 cri.go:89] found id: ""
	I0314 01:01:22.723455   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.723462   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:22.723468   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:22.723512   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:22.761814   66232 cri.go:89] found id: ""
	I0314 01:01:22.761850   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.761860   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:22.761867   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:22.761918   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:22.799649   66232 cri.go:89] found id: ""
	I0314 01:01:22.799677   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.799687   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:22.799697   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:22.799707   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:22.840183   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:22.840215   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:22.893385   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:22.893416   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:22.909225   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:22.909250   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:22.982333   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:22.982353   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:22.982364   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:19.835211   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:21.835716   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:20.194985   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:22.692889   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:21.040809   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:23.538236   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
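(Note on the interleaving: lines tagged with PIDs 65864, 65557 and 66021 come from other minikube profiles running in parallel with this one; each is repeatedly polling a metrics-server pod in kube-system that never reports Ready. A minimal manual equivalent of that readiness check, assuming kubectl access to the affected profile and using a pod name taken from the log, would be:

    kubectl -n kube-system get pods | grep metrics-server
    kubectl -n kube-system get pod metrics-server-57f55c9bc5-7pzll \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
)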
	I0314 01:01:25.560639   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:25.575003   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:25.575082   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:25.613540   66232 cri.go:89] found id: ""
	I0314 01:01:25.613571   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.613583   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:25.613591   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:25.613653   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:25.652340   66232 cri.go:89] found id: ""
	I0314 01:01:25.652365   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.652373   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:25.652379   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:25.652425   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:25.691035   66232 cri.go:89] found id: ""
	I0314 01:01:25.691070   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.691079   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:25.691087   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:25.691152   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:25.729666   66232 cri.go:89] found id: ""
	I0314 01:01:25.729695   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.729705   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:25.729713   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:25.729783   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:25.766836   66232 cri.go:89] found id: ""
	I0314 01:01:25.766863   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.766871   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:25.766877   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:25.766934   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:25.813690   66232 cri.go:89] found id: ""
	I0314 01:01:25.813715   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.813727   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:25.813734   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:25.813796   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:25.858630   66232 cri.go:89] found id: ""
	I0314 01:01:25.858668   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.858679   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:25.858688   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:25.858774   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:25.896340   66232 cri.go:89] found id: ""
	I0314 01:01:25.896365   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.896372   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:25.896380   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:25.896392   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:25.949480   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:25.949513   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:25.965185   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:25.965211   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:26.041208   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:26.041228   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:26.041243   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:26.123892   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:26.123928   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:23.839306   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:26.335177   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.337014   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:24.695636   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:27.193395   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:29.200714   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:26.037924   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.038831   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.666449   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:28.679889   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:28.679948   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:28.717183   66232 cri.go:89] found id: ""
	I0314 01:01:28.717207   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.717214   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:28.717220   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:28.717275   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:28.761049   66232 cri.go:89] found id: ""
	I0314 01:01:28.761070   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.761077   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:28.761083   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:28.761133   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:28.800429   66232 cri.go:89] found id: ""
	I0314 01:01:28.800454   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.800462   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:28.800468   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:28.800523   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:28.841757   66232 cri.go:89] found id: ""
	I0314 01:01:28.841780   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.841788   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:28.841793   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:28.841838   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:28.883658   66232 cri.go:89] found id: ""
	I0314 01:01:28.883686   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.883696   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:28.883703   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:28.883759   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:28.918811   66232 cri.go:89] found id: ""
	I0314 01:01:28.918840   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.918851   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:28.918858   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:28.918916   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:28.955088   66232 cri.go:89] found id: ""
	I0314 01:01:28.955119   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.955130   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:28.955138   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:28.955195   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:28.992865   66232 cri.go:89] found id: ""
	I0314 01:01:28.992891   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.992903   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:28.992913   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:28.992931   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:29.080095   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:29.080132   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:29.127764   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:29.127789   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:29.182075   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:29.182109   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:29.198865   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:29.198891   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:29.277413   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
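(The block above is one complete iteration of what appears to be minikube's control-plane wait loop on PID 66232, apparently the old-k8s-version profile given the v1.20.0 binaries: it probes for a running kube-apiserver process, lists CRI containers for each control-plane component and finds none, then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status logs; the describe-nodes step fails every time because nothing is listening on localhost:8443. The same iteration repeats roughly every three seconds for the rest of this log. The commands it runs, copied from the log itself with the component name as a placeholder, are:

    sudo pgrep -xnf kube-apiserver.*minikube.*
    sudo crictl ps -a --quiet --name=<component>
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
)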
	I0314 01:01:31.777693   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:31.792353   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:31.792426   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:31.830873   66232 cri.go:89] found id: ""
	I0314 01:01:31.830897   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.830904   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:31.830910   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:31.830955   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:31.868648   66232 cri.go:89] found id: ""
	I0314 01:01:31.868670   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.868677   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:31.868683   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:31.868733   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:31.910124   66232 cri.go:89] found id: ""
	I0314 01:01:31.910146   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.910155   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:31.910160   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:31.910209   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:31.957558   66232 cri.go:89] found id: ""
	I0314 01:01:31.957584   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.957592   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:31.957598   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:31.957652   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:32.000112   66232 cri.go:89] found id: ""
	I0314 01:01:32.000139   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.000157   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:32.000165   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:32.000229   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:32.037838   66232 cri.go:89] found id: ""
	I0314 01:01:32.037865   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.037876   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:32.037888   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:32.037949   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:32.076069   66232 cri.go:89] found id: ""
	I0314 01:01:32.076093   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.076101   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:32.076107   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:32.076172   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:32.114702   66232 cri.go:89] found id: ""
	I0314 01:01:32.114730   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.114737   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:32.114745   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:32.114757   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:32.162043   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:32.162078   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:32.219038   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:32.219075   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:32.234331   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:32.234358   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:32.307667   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:32.307688   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:32.307700   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:30.835936   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:33.335575   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:31.692739   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:33.693455   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:30.537265   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:32.538754   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:35.037382   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:34.893945   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:34.907888   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:34.907966   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:34.944887   66232 cri.go:89] found id: ""
	I0314 01:01:34.944911   66232 logs.go:276] 0 containers: []
	W0314 01:01:34.944919   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:34.944925   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:34.944973   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:34.992937   66232 cri.go:89] found id: ""
	I0314 01:01:34.992964   66232 logs.go:276] 0 containers: []
	W0314 01:01:34.992974   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:34.992982   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:34.993040   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:35.030147   66232 cri.go:89] found id: ""
	I0314 01:01:35.030171   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.030178   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:35.030184   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:35.030230   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:35.065966   66232 cri.go:89] found id: ""
	I0314 01:01:35.065999   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.066010   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:35.066018   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:35.066077   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:35.104221   66232 cri.go:89] found id: ""
	I0314 01:01:35.104251   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.104262   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:35.104270   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:35.104347   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:35.145221   66232 cri.go:89] found id: ""
	I0314 01:01:35.145245   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.145253   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:35.145258   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:35.145313   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:35.185119   66232 cri.go:89] found id: ""
	I0314 01:01:35.185152   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.185162   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:35.185168   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:35.185228   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:35.228309   66232 cri.go:89] found id: ""
	I0314 01:01:35.228341   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.228352   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:35.228363   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:35.228381   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:35.242185   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:35.242213   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:35.318542   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:35.318564   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:35.318578   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:35.396003   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:35.396042   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:35.437435   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:35.437464   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:37.992023   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:38.007180   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:38.007260   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:38.047871   66232 cri.go:89] found id: ""
	I0314 01:01:38.047906   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.047917   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:38.047925   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:38.047982   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:38.085359   66232 cri.go:89] found id: ""
	I0314 01:01:38.085388   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.085397   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:38.085404   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:38.085462   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:35.336258   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:37.835151   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:35.696328   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:38.192502   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:37.037490   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:39.038097   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:38.126190   66232 cri.go:89] found id: ""
	I0314 01:01:38.126219   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.126227   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:38.126233   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:38.126285   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:38.163163   66232 cri.go:89] found id: ""
	I0314 01:01:38.163190   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.163197   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:38.163202   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:38.163261   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:38.204338   66232 cri.go:89] found id: ""
	I0314 01:01:38.204360   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.204367   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:38.204372   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:38.204429   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:38.246252   66232 cri.go:89] found id: ""
	I0314 01:01:38.246278   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.246288   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:38.246296   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:38.246357   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:38.281173   66232 cri.go:89] found id: ""
	I0314 01:01:38.281198   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.281205   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:38.281211   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:38.281258   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:38.323744   66232 cri.go:89] found id: ""
	I0314 01:01:38.323774   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.323784   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:38.323794   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:38.323808   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:38.377987   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:38.378020   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:38.392879   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:38.392904   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:38.479475   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:38.479501   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:38.479515   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:38.563409   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:38.563440   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:41.105122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:41.119932   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:41.119997   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:41.158809   66232 cri.go:89] found id: ""
	I0314 01:01:41.158837   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.158847   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:41.158854   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:41.158915   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:41.201150   66232 cri.go:89] found id: ""
	I0314 01:01:41.201175   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.201183   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:41.201189   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:41.201239   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:41.240139   66232 cri.go:89] found id: ""
	I0314 01:01:41.240165   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.240173   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:41.240178   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:41.240232   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:41.278220   66232 cri.go:89] found id: ""
	I0314 01:01:41.278249   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.278257   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:41.278262   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:41.278310   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:41.313130   66232 cri.go:89] found id: ""
	I0314 01:01:41.313161   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.313170   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:41.313175   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:41.313235   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:41.351266   66232 cri.go:89] found id: ""
	I0314 01:01:41.351296   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.351305   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:41.351313   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:41.351378   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:41.389765   66232 cri.go:89] found id: ""
	I0314 01:01:41.389796   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.389807   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:41.389816   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:41.389893   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:41.437503   66232 cri.go:89] found id: ""
	I0314 01:01:41.437527   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.437537   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:41.437553   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:41.437568   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:41.451137   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:41.451170   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:41.554349   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:41.554376   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:41.554391   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:41.634670   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:41.634713   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:41.678576   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:41.678607   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:39.836520   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:41.837350   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:40.192708   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:42.193948   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:41.038661   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:43.538547   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:44.237699   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:44.252678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:44.252757   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:44.290393   66232 cri.go:89] found id: ""
	I0314 01:01:44.290420   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.290430   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:44.290438   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:44.290492   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:44.331394   66232 cri.go:89] found id: ""
	I0314 01:01:44.331426   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.331438   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:44.331446   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:44.331506   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:44.373654   66232 cri.go:89] found id: ""
	I0314 01:01:44.373686   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.373694   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:44.373702   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:44.373764   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:44.414168   66232 cri.go:89] found id: ""
	I0314 01:01:44.414198   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.414206   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:44.414212   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:44.414259   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:44.451158   66232 cri.go:89] found id: ""
	I0314 01:01:44.451183   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.451193   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:44.451201   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:44.451269   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:44.495410   66232 cri.go:89] found id: ""
	I0314 01:01:44.495436   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.495443   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:44.495450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:44.495509   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:44.539100   66232 cri.go:89] found id: ""
	I0314 01:01:44.539123   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.539129   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:44.539136   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:44.539189   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:44.581428   66232 cri.go:89] found id: ""
	I0314 01:01:44.581451   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.581463   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:44.581473   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:44.581491   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:44.657373   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:44.657393   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:44.657406   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:44.742163   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:44.742198   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:44.786447   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:44.786481   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:44.840479   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:44.840534   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:47.355369   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:47.369427   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:47.369491   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:47.408529   66232 cri.go:89] found id: ""
	I0314 01:01:47.408559   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.408567   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:47.408574   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:47.408619   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:47.445164   66232 cri.go:89] found id: ""
	I0314 01:01:47.445192   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.445201   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:47.445208   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:47.445255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:47.503333   66232 cri.go:89] found id: ""
	I0314 01:01:47.503367   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.503378   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:47.503385   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:47.503441   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:47.544289   66232 cri.go:89] found id: ""
	I0314 01:01:47.544313   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.544322   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:47.544329   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:47.544389   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:47.581686   66232 cri.go:89] found id: ""
	I0314 01:01:47.581707   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.581715   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:47.581726   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:47.581773   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:47.620907   66232 cri.go:89] found id: ""
	I0314 01:01:47.620937   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.620948   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:47.620954   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:47.620999   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:47.655975   66232 cri.go:89] found id: ""
	I0314 01:01:47.656006   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.656018   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:47.656026   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:47.656088   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:47.694787   66232 cri.go:89] found id: ""
	I0314 01:01:47.694813   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.694822   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:47.694832   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:47.694846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:47.732722   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:47.732752   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:47.784521   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:47.784551   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:47.798074   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:47.798096   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:47.872951   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:47.872971   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:47.872984   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:44.336278   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:46.336942   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:44.693975   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:47.194065   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:46.037997   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:48.038275   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.456896   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:50.472083   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:50.472159   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:50.510213   66232 cri.go:89] found id: ""
	I0314 01:01:50.510236   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.510244   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:50.510251   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:50.510308   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:50.551878   66232 cri.go:89] found id: ""
	I0314 01:01:50.551906   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.551915   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:50.551923   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:50.551983   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:50.599971   66232 cri.go:89] found id: ""
	I0314 01:01:50.599993   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.600000   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:50.600011   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:50.600068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:50.636105   66232 cri.go:89] found id: ""
	I0314 01:01:50.636135   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.636146   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:50.636154   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:50.636218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:50.674154   66232 cri.go:89] found id: ""
	I0314 01:01:50.674188   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.674199   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:50.674207   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:50.674273   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:50.711946   66232 cri.go:89] found id: ""
	I0314 01:01:50.711980   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.711992   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:50.711999   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:50.712048   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:50.750574   66232 cri.go:89] found id: ""
	I0314 01:01:50.750601   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.750612   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:50.750620   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:50.750679   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:50.788991   66232 cri.go:89] found id: ""
	I0314 01:01:50.789022   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.789033   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:50.789045   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:50.789060   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:50.842491   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:50.842524   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:50.857759   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:50.857785   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:50.929715   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:50.929739   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:50.929754   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:51.008843   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:51.008883   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:48.835669   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.835802   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.335897   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:49.692834   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:52.191722   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:54.192101   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.543509   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.037040   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.554369   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:53.569045   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:53.569125   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:53.607571   66232 cri.go:89] found id: ""
	I0314 01:01:53.607602   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.607613   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:53.607621   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:53.607700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:53.647998   66232 cri.go:89] found id: ""
	I0314 01:01:53.648027   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.648037   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:53.648044   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:53.648116   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:53.684825   66232 cri.go:89] found id: ""
	I0314 01:01:53.684855   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.684866   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:53.684873   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:53.684931   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:53.722438   66232 cri.go:89] found id: ""
	I0314 01:01:53.722465   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.722476   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:53.722484   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:53.722543   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:53.761945   66232 cri.go:89] found id: ""
	I0314 01:01:53.761987   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.761999   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:53.762014   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:53.762075   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:53.799307   66232 cri.go:89] found id: ""
	I0314 01:01:53.799338   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.799349   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:53.799362   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:53.799420   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:53.838685   66232 cri.go:89] found id: ""
	I0314 01:01:53.838713   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.838724   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:53.838731   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:53.838810   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:53.884324   66232 cri.go:89] found id: ""
	I0314 01:01:53.884351   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.884360   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:53.884370   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:53.884382   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:53.942495   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:53.942527   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:54.007790   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:54.007828   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:54.023348   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:54.023378   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:54.099122   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:54.099150   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:54.099165   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:56.679464   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:56.693691   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:56.693753   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:56.731721   66232 cri.go:89] found id: ""
	I0314 01:01:56.731749   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.731756   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:56.731761   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:56.731811   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:56.766579   66232 cri.go:89] found id: ""
	I0314 01:01:56.766607   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.766614   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:56.766620   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:56.766675   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:56.807537   66232 cri.go:89] found id: ""
	I0314 01:01:56.807565   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.807574   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:56.807579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:56.807631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:56.849077   66232 cri.go:89] found id: ""
	I0314 01:01:56.849100   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.849106   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:56.849112   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:56.849169   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:56.890982   66232 cri.go:89] found id: ""
	I0314 01:01:56.891003   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.891011   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:56.891016   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:56.891061   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:56.929769   66232 cri.go:89] found id: ""
	I0314 01:01:56.929790   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.929799   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:56.929805   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:56.929848   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:56.967319   66232 cri.go:89] found id: ""
	I0314 01:01:56.967346   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.967356   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:56.967363   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:56.967421   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:57.004649   66232 cri.go:89] found id: ""
	I0314 01:01:57.004670   66232 logs.go:276] 0 containers: []
	W0314 01:01:57.004677   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:57.004685   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:57.004696   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:57.018578   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:57.018604   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:57.090826   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:57.090852   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:57.090868   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:57.170367   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:57.170398   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:57.216138   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:57.216179   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
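	The repeated blocks above are minikube's standard control-plane health loop for this profile: it probes for a running kube-apiserver process, lists CRI containers for each control-plane component, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. The same checks can be reproduced by hand from inside the node; a minimal sketch, assuming shell access via 'minikube ssh' to the same profile and the v1.20.0 kubectl binary path shown in the log:

	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # is an apiserver process running?
	  sudo crictl ps -a --quiet --name=kube-apiserver     # any apiserver container, running or exited?
	  sudo journalctl -u kubelet -n 400                   # recent kubelet logs
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig       # fails while the apiserver is down
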
	I0314 01:01:55.835724   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:57.836100   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:56.192712   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:58.193199   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:55.538829   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:58.037589   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:00.038724   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:59.769685   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:59.786652   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:59.786713   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:59.869453   66232 cri.go:89] found id: ""
	I0314 01:01:59.869480   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.869491   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:59.869499   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:59.869568   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:59.915747   66232 cri.go:89] found id: ""
	I0314 01:01:59.915769   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.915777   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:59.915782   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:59.915840   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:59.951088   66232 cri.go:89] found id: ""
	I0314 01:01:59.951117   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.951127   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:59.951133   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:59.951197   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:59.986847   66232 cri.go:89] found id: ""
	I0314 01:01:59.986877   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.986890   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:59.986898   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:59.986954   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:00.025390   66232 cri.go:89] found id: ""
	I0314 01:02:00.025420   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.025432   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:00.025440   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:00.025493   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:00.064174   66232 cri.go:89] found id: ""
	I0314 01:02:00.064206   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.064217   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:00.064226   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:00.064286   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:00.102079   66232 cri.go:89] found id: ""
	I0314 01:02:00.102102   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.102112   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:00.102119   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:00.102179   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:00.138672   66232 cri.go:89] found id: ""
	I0314 01:02:00.138700   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.138711   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:00.138721   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:00.138740   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:00.153516   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:00.153548   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:00.226585   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:00.226616   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:00.226631   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:00.307861   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:00.307898   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:00.353938   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:00.353966   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:02.909252   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:02.923483   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:02.923560   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:02.964379   66232 cri.go:89] found id: ""
	I0314 01:02:02.964408   66232 logs.go:276] 0 containers: []
	W0314 01:02:02.964419   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:02.964427   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:02.964486   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:03.001988   66232 cri.go:89] found id: ""
	I0314 01:02:03.002018   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.002028   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:03.002036   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:03.002106   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:03.043534   66232 cri.go:89] found id: ""
	I0314 01:02:03.043561   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.043572   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:03.043579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:03.043637   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:03.083413   66232 cri.go:89] found id: ""
	I0314 01:02:03.083436   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.083444   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:03.083450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:03.083504   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:59.837128   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.336485   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:00.692314   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.693186   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.039631   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:04.536890   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:03.117627   66232 cri.go:89] found id: ""
	I0314 01:02:03.117652   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.117664   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:03.117670   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:03.117718   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:03.151758   66232 cri.go:89] found id: ""
	I0314 01:02:03.151791   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.151802   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:03.151810   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:03.151861   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:03.192091   66232 cri.go:89] found id: ""
	I0314 01:02:03.192112   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.192118   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:03.192124   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:03.192178   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:03.235995   66232 cri.go:89] found id: ""
	I0314 01:02:03.236019   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.236029   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:03.236039   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:03.236053   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:03.289431   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:03.289475   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:03.305271   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:03.305325   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:03.383902   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:03.383922   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:03.383937   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:03.462882   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:03.462926   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:06.007991   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:06.023709   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:06.023768   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:06.063630   66232 cri.go:89] found id: ""
	I0314 01:02:06.063655   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.063662   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:06.063669   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:06.063727   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:06.103042   66232 cri.go:89] found id: ""
	I0314 01:02:06.103074   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.103083   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:06.103092   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:06.103149   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:06.139774   66232 cri.go:89] found id: ""
	I0314 01:02:06.139799   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.139810   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:06.139817   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:06.139874   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:06.176671   66232 cri.go:89] found id: ""
	I0314 01:02:06.176713   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.176724   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:06.176732   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:06.176798   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:06.216798   66232 cri.go:89] found id: ""
	I0314 01:02:06.216828   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.216840   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:06.216847   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:06.216903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:06.256606   66232 cri.go:89] found id: ""
	I0314 01:02:06.256635   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.256645   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:06.256653   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:06.256712   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:06.295087   66232 cri.go:89] found id: ""
	I0314 01:02:06.295119   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.295129   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:06.295137   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:06.295198   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:06.329411   66232 cri.go:89] found id: ""
	I0314 01:02:06.329441   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.329454   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:06.329464   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:06.329489   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:06.412363   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:06.412409   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:06.458902   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:06.458932   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:06.510147   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:06.510182   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:06.526670   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:06.526695   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:06.604970   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:04.835705   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:07.335832   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:04.693230   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:06.694579   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:08.697716   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:06.538380   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:08.538547   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:09.106124   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:09.119646   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:09.119709   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:09.155771   66232 cri.go:89] found id: ""
	I0314 01:02:09.155804   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.155815   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:09.155824   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:09.155883   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:09.191683   66232 cri.go:89] found id: ""
	I0314 01:02:09.191722   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.191734   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:09.191742   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:09.191808   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:09.227010   66232 cri.go:89] found id: ""
	I0314 01:02:09.227033   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.227041   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:09.227050   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:09.227118   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:09.262820   66232 cri.go:89] found id: ""
	I0314 01:02:09.262850   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.262861   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:09.262869   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:09.262925   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:09.296057   66232 cri.go:89] found id: ""
	I0314 01:02:09.296092   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.296102   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:09.296109   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:09.296171   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:09.329589   66232 cri.go:89] found id: ""
	I0314 01:02:09.329615   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.329626   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:09.329634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:09.329685   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:09.374675   66232 cri.go:89] found id: ""
	I0314 01:02:09.374702   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.374710   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:09.374718   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:09.374785   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:09.412467   66232 cri.go:89] found id: ""
	I0314 01:02:09.412497   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.412508   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:09.412518   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:09.412535   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:09.465354   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:09.465386   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:09.481823   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:09.481849   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:09.558431   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:09.558458   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:09.558475   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:09.641132   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:09.641171   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:12.190189   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:12.203783   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:12.203858   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:12.240189   66232 cri.go:89] found id: ""
	I0314 01:02:12.240219   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.240230   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:12.240238   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:12.240296   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:12.276307   66232 cri.go:89] found id: ""
	I0314 01:02:12.276336   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.276346   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:12.276354   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:12.276415   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:12.316916   66232 cri.go:89] found id: ""
	I0314 01:02:12.316949   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.316967   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:12.316975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:12.317036   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:12.356871   66232 cri.go:89] found id: ""
	I0314 01:02:12.356900   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.356910   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:12.356918   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:12.356981   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:12.391983   66232 cri.go:89] found id: ""
	I0314 01:02:12.392015   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.392026   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:12.392035   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:12.392105   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:12.428823   66232 cri.go:89] found id: ""
	I0314 01:02:12.428857   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.428868   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:12.428877   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:12.428938   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:12.466319   66232 cri.go:89] found id: ""
	I0314 01:02:12.466342   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.466349   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:12.466354   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:12.466413   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:12.502277   66232 cri.go:89] found id: ""
	I0314 01:02:12.502309   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.502321   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:12.502333   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:12.502352   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:12.582309   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:12.582340   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:12.621333   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:12.621357   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:12.678396   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:12.678432   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:12.694371   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:12.694397   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:12.767592   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:09.337016   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.339617   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.192226   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:13.195180   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.037728   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:13.037824   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.038206   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
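	Every "describe nodes" attempt in this profile fails the same way: crictl finds no kube-apiserver container at all, so nothing is serving on localhost:8443 and the connection refusal is expected rather than a kubeconfig problem. A quick way to confirm this from the node (a sketch, using the port shown in the log):

	  sudo ss -ltnp | grep 8443                   # is anything listening on the apiserver port?
	  curl -sk https://localhost:8443/healthz     # connection refused while the apiserver is down
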
	I0314 01:02:15.268149   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:15.281634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:15.281707   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:15.316336   66232 cri.go:89] found id: ""
	I0314 01:02:15.316358   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.316366   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:15.316373   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:15.316437   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:15.356168   66232 cri.go:89] found id: ""
	I0314 01:02:15.356194   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.356201   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:15.356206   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:15.356257   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:15.394686   66232 cri.go:89] found id: ""
	I0314 01:02:15.394714   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.394726   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:15.394734   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:15.394813   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:15.433996   66232 cri.go:89] found id: ""
	I0314 01:02:15.434023   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.434034   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:15.434042   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:15.434103   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:15.479544   66232 cri.go:89] found id: ""
	I0314 01:02:15.479572   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.479583   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:15.479590   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:15.479659   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:15.514835   66232 cri.go:89] found id: ""
	I0314 01:02:15.514865   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.514875   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:15.514883   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:15.514942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:15.554980   66232 cri.go:89] found id: ""
	I0314 01:02:15.555011   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.555022   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:15.555030   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:15.555092   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:15.590130   66232 cri.go:89] found id: ""
	I0314 01:02:15.590167   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.590178   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:15.590188   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:15.590203   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:15.658375   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:15.658394   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:15.658407   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:15.737774   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:15.737806   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:15.780480   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:15.780512   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:15.832787   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:15.832830   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:13.834955   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.836544   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:17.836736   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.693510   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:18.193089   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:17.537729   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:19.540149   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
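	The interleaved pod_ready lines come from the other test profiles polling their metrics-server pod until its Ready condition turns True; the 4-minute timeout for one of these waits is visible further below. An equivalent one-off check, as a sketch with a placeholder context name:

	  kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-kll8v \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
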
	I0314 01:02:18.350032   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:18.364871   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:18.364931   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:18.406581   66232 cri.go:89] found id: ""
	I0314 01:02:18.406611   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.406620   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:18.406633   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:18.406696   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:18.446140   66232 cri.go:89] found id: ""
	I0314 01:02:18.446166   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.446176   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:18.446183   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:18.446242   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:18.492662   66232 cri.go:89] found id: ""
	I0314 01:02:18.492705   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.492713   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:18.492719   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:18.492777   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:18.535933   66232 cri.go:89] found id: ""
	I0314 01:02:18.535961   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.535972   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:18.535980   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:18.536056   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:18.574133   66232 cri.go:89] found id: ""
	I0314 01:02:18.574159   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.574167   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:18.574173   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:18.574227   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:18.612726   66232 cri.go:89] found id: ""
	I0314 01:02:18.612750   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.612757   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:18.612763   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:18.612815   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:18.653068   66232 cri.go:89] found id: ""
	I0314 01:02:18.653092   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.653099   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:18.653105   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:18.653148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:18.692840   66232 cri.go:89] found id: ""
	I0314 01:02:18.692880   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.692890   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:18.692902   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:18.692915   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:18.748680   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:18.748717   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:18.764026   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:18.764054   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:18.841767   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:18.841791   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:18.841805   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:18.923479   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:18.923512   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:21.467679   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:21.482326   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:21.482400   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:21.519603   66232 cri.go:89] found id: ""
	I0314 01:02:21.519627   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.519635   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:21.519641   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:21.519711   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:21.562301   66232 cri.go:89] found id: ""
	I0314 01:02:21.562325   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.562333   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:21.562338   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:21.562395   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:21.599503   66232 cri.go:89] found id: ""
	I0314 01:02:21.599531   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.599539   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:21.599545   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:21.599598   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:21.635347   66232 cri.go:89] found id: ""
	I0314 01:02:21.635378   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.635390   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:21.635397   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:21.635458   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:21.672622   66232 cri.go:89] found id: ""
	I0314 01:02:21.672648   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.672658   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:21.672667   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:21.672719   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:21.713177   66232 cri.go:89] found id: ""
	I0314 01:02:21.713201   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.713209   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:21.713217   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:21.713277   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:21.754273   66232 cri.go:89] found id: ""
	I0314 01:02:21.754312   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.754336   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:21.754350   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:21.754408   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:21.793782   66232 cri.go:89] found id: ""
	I0314 01:02:21.793832   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.793852   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:21.793864   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:21.793886   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:21.877495   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:21.877521   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:21.877536   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:21.963446   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:21.963485   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:22.005250   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:22.005286   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:22.081328   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:22.081368   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:20.336150   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:21.836598   65864 pod_ready.go:81] duration metric: took 4m0.008051794s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	E0314 01:02:21.836623   65864 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:02:21.836633   65864 pod_ready.go:38] duration metric: took 4m4.551998385s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:02:21.836650   65864 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:02:21.836684   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:21.836737   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:21.913367   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:21.913392   65864 cri.go:89] found id: ""
	I0314 01:02:21.913401   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:21.913461   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:21.920425   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:21.920491   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:21.968527   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:21.968560   65864 cri.go:89] found id: ""
	I0314 01:02:21.968578   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:21.968641   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:21.973938   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:21.974019   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:22.027214   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:22.027239   65864 cri.go:89] found id: ""
	I0314 01:02:22.027250   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:22.027301   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.033919   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:22.034007   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:22.085453   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:22.085477   65864 cri.go:89] found id: ""
	I0314 01:02:22.085486   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:22.085541   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.091651   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:22.091726   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:22.134083   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:22.134112   65864 cri.go:89] found id: ""
	I0314 01:02:22.134121   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:22.134179   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.139013   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:22.139089   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:22.176760   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:22.176785   65864 cri.go:89] found id: ""
	I0314 01:02:22.176795   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:22.176844   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.182497   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:22.182573   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:22.236966   65864 cri.go:89] found id: ""
	I0314 01:02:22.237000   65864 logs.go:276] 0 containers: []
	W0314 01:02:22.237010   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:22.237017   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:22.237078   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:22.289422   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:22.289448   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:22.289454   65864 cri.go:89] found id: ""
	I0314 01:02:22.289462   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:22.289526   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.295489   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.300166   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:22.300189   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:22.361740   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:22.361779   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:22.432402   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:22.432443   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:22.476348   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:22.476378   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:22.516881   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:22.516911   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:22.576864   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:22.576899   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:22.622739   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:22.622783   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:22.679757   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:22.679794   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:22.882084   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:22.882126   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:22.937962   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:22.937999   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:22.994180   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:22.994209   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:23.038730   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:23.038761   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:23.518422   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:23.518471   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:20.193555   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:22.194625   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:22.039562   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:24.043053   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:24.599757   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:24.615216   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:24.615273   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:24.654495   66232 cri.go:89] found id: ""
	I0314 01:02:24.654521   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.654529   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:24.654535   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:24.654581   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:24.691822   66232 cri.go:89] found id: ""
	I0314 01:02:24.691854   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.691864   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:24.691872   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:24.691927   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:24.734755   66232 cri.go:89] found id: ""
	I0314 01:02:24.734796   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.734806   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:24.734812   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:24.734864   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:24.770474   66232 cri.go:89] found id: ""
	I0314 01:02:24.770502   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.770513   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:24.770520   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:24.770564   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:24.807518   66232 cri.go:89] found id: ""
	I0314 01:02:24.807549   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.807562   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:24.807570   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:24.807636   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:24.844469   66232 cri.go:89] found id: ""
	I0314 01:02:24.844500   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.844513   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:24.844521   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:24.844585   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:24.882099   66232 cri.go:89] found id: ""
	I0314 01:02:24.882136   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.882147   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:24.882155   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:24.882215   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:24.922711   66232 cri.go:89] found id: ""
	I0314 01:02:24.922751   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.922773   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:24.922787   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:24.922802   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:24.965349   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:24.965374   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:25.021552   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:25.021585   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:25.039990   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:25.040027   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:25.116945   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:25.116967   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:25.116981   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:27.706427   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:27.722129   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:27.722193   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:27.762976   66232 cri.go:89] found id: ""
	I0314 01:02:27.763015   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.763023   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:27.763029   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:27.763077   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:27.803939   66232 cri.go:89] found id: ""
	I0314 01:02:27.803979   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.803990   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:27.803997   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:27.804068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:27.844923   66232 cri.go:89] found id: ""
	I0314 01:02:27.844946   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.844953   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:27.844959   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:27.845015   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:27.882694   66232 cri.go:89] found id: ""
	I0314 01:02:27.882717   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.882725   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:27.882731   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:27.882801   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:27.922926   66232 cri.go:89] found id: ""
	I0314 01:02:27.922958   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.922968   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:27.922975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:27.923035   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:27.960120   66232 cri.go:89] found id: ""
	I0314 01:02:27.960149   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.960160   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:27.960168   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:27.960228   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:28.015021   66232 cri.go:89] found id: ""
	I0314 01:02:28.015047   66232 logs.go:276] 0 containers: []
	W0314 01:02:28.015056   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:28.015062   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:28.015119   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:28.054923   66232 cri.go:89] found id: ""
	I0314 01:02:28.054946   66232 logs.go:276] 0 containers: []
	W0314 01:02:28.054952   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:28.054960   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:28.054972   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:26.038373   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:26.055483   65864 api_server.go:72] duration metric: took 4m14.013216316s to wait for apiserver process to appear ...
	I0314 01:02:26.055505   65864 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:02:26.055536   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:26.055585   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:26.108344   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:26.108363   65864 cri.go:89] found id: ""
	I0314 01:02:26.108370   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:26.108420   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.112806   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:26.112872   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:26.155399   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:26.155417   65864 cri.go:89] found id: ""
	I0314 01:02:26.155424   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:26.155468   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.159725   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:26.159780   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:26.201938   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:26.201960   65864 cri.go:89] found id: ""
	I0314 01:02:26.201968   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:26.202012   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.206751   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:26.206831   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:26.252327   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:26.252350   65864 cri.go:89] found id: ""
	I0314 01:02:26.252357   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:26.252405   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.257325   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:26.257387   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:26.297880   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:26.297901   65864 cri.go:89] found id: ""
	I0314 01:02:26.297910   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:26.297965   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.302607   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:26.302679   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:26.343104   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:26.343131   65864 cri.go:89] found id: ""
	I0314 01:02:26.343141   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:26.343207   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.347594   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:26.347652   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:26.390465   65864 cri.go:89] found id: ""
	I0314 01:02:26.390495   65864 logs.go:276] 0 containers: []
	W0314 01:02:26.390505   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:26.390517   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:26.390576   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:26.434540   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:26.434566   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:26.434572   65864 cri.go:89] found id: ""
	I0314 01:02:26.434582   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:26.434644   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.439794   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.445012   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:26.445036   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:26.488302   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:26.488331   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:26.526601   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:26.526630   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:26.578955   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:26.578989   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:26.633535   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:26.633573   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:26.764496   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:26.764533   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:26.822677   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:26.822713   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:26.866628   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:26.866653   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:26.909498   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:26.909524   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:26.965612   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:26.965646   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:27.004922   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:27.004974   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:27.422800   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:27.422844   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:27.441082   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:27.441113   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:24.693782   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:27.193414   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:26.537535   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:28.539922   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:28.111690   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:28.111723   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:28.126158   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:28.126189   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:28.200521   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:28.200542   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:28.200554   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:28.279637   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:28.279672   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:30.824286   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:30.840707   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.840787   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.888628   66232 cri.go:89] found id: ""
	I0314 01:02:30.888658   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.888669   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:30.888677   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.888758   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.934219   66232 cri.go:89] found id: ""
	I0314 01:02:30.934254   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.934264   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:30.934272   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.934332   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.979679   66232 cri.go:89] found id: ""
	I0314 01:02:30.979702   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.979713   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:30.979721   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.979792   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:31.024045   66232 cri.go:89] found id: ""
	I0314 01:02:31.024074   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.024085   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:31.024093   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:31.024150   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:31.070153   66232 cri.go:89] found id: ""
	I0314 01:02:31.070185   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.070197   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:31.070204   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:31.070267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:31.121943   66232 cri.go:89] found id: ""
	I0314 01:02:31.121972   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.121983   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:31.121992   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:31.122056   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:31.168934   66232 cri.go:89] found id: ""
	I0314 01:02:31.168951   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.168959   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:31.168965   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:31.169040   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:31.213885   66232 cri.go:89] found id: ""
	I0314 01:02:31.213917   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.213929   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:31.213939   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:31.213958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:31.304097   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:31.304127   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:31.304142   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:31.388525   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:31.388566   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:31.442920   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:31.442953   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:31.505932   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:31.505965   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:29.995508   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 01:02:30.001049   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0314 01:02:30.002172   65864 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 01:02:30.002194   65864 api_server.go:131] duration metric: took 3.946684299s to wait for apiserver health ...
	I0314 01:02:30.002201   65864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:02:30.002224   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.002268   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.043814   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:30.043836   65864 cri.go:89] found id: ""
	I0314 01:02:30.043850   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:30.043904   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.048215   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.048287   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.085507   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:30.085530   65864 cri.go:89] found id: ""
	I0314 01:02:30.085538   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:30.085587   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.089899   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.089958   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.129518   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:30.129538   65864 cri.go:89] found id: ""
	I0314 01:02:30.129545   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:30.129588   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.134037   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.134121   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:30.178092   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:30.178114   65864 cri.go:89] found id: ""
	I0314 01:02:30.178122   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:30.178174   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.184655   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:30.184712   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:30.223945   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:30.223969   65864 cri.go:89] found id: ""
	I0314 01:02:30.223987   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:30.224051   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.228354   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:30.228410   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:30.265712   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:30.265741   65864 cri.go:89] found id: ""
	I0314 01:02:30.265758   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:30.265814   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.270260   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:30.270312   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:30.320283   65864 cri.go:89] found id: ""
	I0314 01:02:30.320314   65864 logs.go:276] 0 containers: []
	W0314 01:02:30.320327   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:30.320334   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:30.320385   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:30.360838   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:30.360865   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:30.360869   65864 cri.go:89] found id: ""
	I0314 01:02:30.360876   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:30.360919   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.366350   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.370839   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:30.370862   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:30.422403   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:30.422432   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:30.461303   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:30.461333   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:30.500335   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:30.500364   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:30.925694   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:30.925740   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:30.977607   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:30.977643   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:31.040726   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:31.040758   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:31.097774   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:31.097811   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:31.161995   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:31.162038   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:31.229782   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:31.229823   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:31.268715   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:31.268742   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:31.288135   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:31.288164   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:31.459345   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:31.459375   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:34.020556   65864 system_pods.go:59] 8 kube-system pods found
	I0314 01:02:34.020589   65864 system_pods.go:61] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running
	I0314 01:02:34.020598   65864 system_pods.go:61] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running
	I0314 01:02:34.020607   65864 system_pods.go:61] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running
	I0314 01:02:34.020612   65864 system_pods.go:61] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running
	I0314 01:02:34.020616   65864 system_pods.go:61] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 01:02:34.020620   65864 system_pods.go:61] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running
	I0314 01:02:34.020628   65864 system_pods.go:61] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:34.020634   65864 system_pods.go:61] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 01:02:34.020644   65864 system_pods.go:74] duration metric: took 4.018436618s to wait for pod list to return data ...
	I0314 01:02:34.020653   65864 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:02:34.023473   65864 default_sa.go:45] found service account: "default"
	I0314 01:02:34.023496   65864 default_sa.go:55] duration metric: took 2.831779ms for default service account to be created ...
	I0314 01:02:34.023504   65864 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:02:34.030011   65864 system_pods.go:86] 8 kube-system pods found
	I0314 01:02:34.030060   65864 system_pods.go:89] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running
	I0314 01:02:34.030068   65864 system_pods.go:89] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running
	I0314 01:02:34.030077   65864 system_pods.go:89] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running
	I0314 01:02:34.030083   65864 system_pods.go:89] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running
	I0314 01:02:34.030092   65864 system_pods.go:89] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 01:02:34.030107   65864 system_pods.go:89] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running
	I0314 01:02:34.030124   65864 system_pods.go:89] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:34.030131   65864 system_pods.go:89] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 01:02:34.030143   65864 system_pods.go:126] duration metric: took 6.633594ms to wait for k8s-apps to be running ...
	I0314 01:02:34.030188   65864 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:02:34.030262   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:34.050932   65864 system_svc.go:56] duration metric: took 20.734837ms WaitForService to wait for kubelet
	I0314 01:02:34.050961   65864 kubeadm.go:576] duration metric: took 4m22.008698948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:02:34.050980   65864 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:02:34.055036   65864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:02:34.055068   65864 node_conditions.go:123] node cpu capacity is 2
	I0314 01:02:34.055083   65864 node_conditions.go:105] duration metric: took 4.097364ms to run NodePressure ...
	I0314 01:02:34.055105   65864 start.go:240] waiting for startup goroutines ...
	I0314 01:02:34.055118   65864 start.go:245] waiting for cluster config update ...
	I0314 01:02:34.055132   65864 start.go:254] writing updated cluster config ...
	I0314 01:02:34.055496   65864 ssh_runner.go:195] Run: rm -f paused
	I0314 01:02:34.113276   65864 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 01:02:34.115462   65864 out.go:177] * Done! kubectl is now configured to use "no-preload-585806" cluster and "default" namespace by default
	I0314 01:02:29.693041   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:32.194975   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:30.538234   66021 pod_ready.go:81] duration metric: took 4m0.007493671s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	E0314 01:02:30.538259   66021 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:02:30.538266   66021 pod_ready.go:38] duration metric: took 4m4.916255619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:02:30.538278   66021 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:02:30.538307   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.538363   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.592811   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:30.592839   66021 cri.go:89] found id: ""
	I0314 01:02:30.592850   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:30.592911   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.598839   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.598908   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.642277   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:30.642301   66021 cri.go:89] found id: ""
	I0314 01:02:30.642310   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:30.642362   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.646745   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.646815   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.696518   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:30.696538   66021 cri.go:89] found id: ""
	I0314 01:02:30.696548   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:30.696601   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.701433   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.701496   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:30.741777   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:30.741805   66021 cri.go:89] found id: ""
	I0314 01:02:30.741815   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:30.741873   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.746610   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:30.746678   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:30.802714   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:30.802734   66021 cri.go:89] found id: ""
	I0314 01:02:30.802743   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:30.802905   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.807733   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:30.807800   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:30.857325   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:30.857348   66021 cri.go:89] found id: ""
	I0314 01:02:30.857357   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:30.857411   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.864272   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:30.864342   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:30.913206   66021 cri.go:89] found id: ""
	I0314 01:02:30.913233   66021 logs.go:276] 0 containers: []
	W0314 01:02:30.913240   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:30.913246   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:30.913306   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:30.962101   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:30.962140   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:30.962146   66021 cri.go:89] found id: ""
	I0314 01:02:30.962164   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:30.962225   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.968138   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.974297   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:30.974321   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:31.169483   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:31.169515   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:31.231894   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:31.231933   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:31.292732   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:31.292784   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:31.340076   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:31.340116   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:31.405921   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:31.405964   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:31.456370   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:31.456398   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:31.504710   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:31.504736   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:31.989644   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:31.989675   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:32.048608   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:32.048641   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:32.063791   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:32.063820   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:32.104259   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:32.104285   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:32.143364   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:32.143388   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:34.704603   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:34.723060   66021 api_server.go:72] duration metric: took 4m16.82749669s to wait for apiserver process to appear ...
	I0314 01:02:34.723094   66021 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:02:34.723131   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:34.723195   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:34.763208   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:34.763235   66021 cri.go:89] found id: ""
	I0314 01:02:34.763245   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:34.763321   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.768746   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:34.768824   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:34.811836   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:34.811859   66021 cri.go:89] found id: ""
	I0314 01:02:34.811867   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:34.811921   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.816649   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:34.816714   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:34.857291   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:34.857312   66021 cri.go:89] found id: ""
	I0314 01:02:34.857319   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:34.857364   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.861988   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:34.862069   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:34.903495   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:34.903520   66021 cri.go:89] found id: ""
	I0314 01:02:34.903529   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:34.903589   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.908672   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:34.908728   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:34.954304   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:34.954327   66021 cri.go:89] found id: ""
	I0314 01:02:34.954335   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:34.954381   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.959231   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:34.959288   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:35.004076   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:35.004102   66021 cri.go:89] found id: ""
	I0314 01:02:35.004111   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:35.004164   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.009125   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:35.009193   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:35.049932   66021 cri.go:89] found id: ""
	I0314 01:02:35.049961   66021 logs.go:276] 0 containers: []
	W0314 01:02:35.049971   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:35.049979   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:35.050047   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:35.107527   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:35.107575   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:35.107582   66021 cri.go:89] found id: ""
	I0314 01:02:35.107591   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:35.107649   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.112355   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.116898   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:35.116925   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
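	(Log lines from three concurrent start-up runs are interleaved in this stretch; they can be told apart by the process id column after the timestamp: 66021, 66232 and 65557. The repeated pattern above, locate each control-plane component's container with `sudo crictl ps -a --quiet --name=<component>` and then pull its recent logs, is sketched below in Go as a standalone helper. This is an illustration only, not minikube's cri.go: the helper name is made up, and it assumes crictl is on PATH on the machine where it runs, e.g. inside the node.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists container IDs for one control-plane component the
    // same way the log lines above do: `sudo crictl ps -a --quiet --name=<name>`.
    // Helper name and error handling are illustrative, not minikube's cri.go.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: lookup failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
        }
    }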
	I0314 01:02:34.021725   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:34.039342   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:34.039420   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:34.086740   66232 cri.go:89] found id: ""
	I0314 01:02:34.086775   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.086787   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:34.086803   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:34.086869   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:34.131404   66232 cri.go:89] found id: ""
	I0314 01:02:34.131432   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.131440   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:34.131445   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:34.131497   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:34.179153   66232 cri.go:89] found id: ""
	I0314 01:02:34.179182   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.179192   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:34.179199   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:34.179255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:34.228867   66232 cri.go:89] found id: ""
	I0314 01:02:34.228892   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.228902   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:34.228908   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:34.228942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:34.272680   66232 cri.go:89] found id: ""
	I0314 01:02:34.272705   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.272715   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:34.272722   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:34.272772   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:34.311626   66232 cri.go:89] found id: ""
	I0314 01:02:34.311672   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.311684   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:34.311692   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:34.311751   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:34.349977   66232 cri.go:89] found id: ""
	I0314 01:02:34.349998   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.350006   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:34.350012   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:34.350070   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:34.398456   66232 cri.go:89] found id: ""
	I0314 01:02:34.398481   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.398491   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:34.398503   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:34.398515   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:34.472170   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:34.472208   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:34.498046   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:34.498076   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:34.574474   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:34.574496   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:34.574529   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:34.656398   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:34.656435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:37.201236   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:37.216950   66232 kubeadm.go:591] duration metric: took 4m2.27726413s to restartPrimaryControlPlane
	W0314 01:02:37.217024   66232 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 01:02:37.217054   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
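	(The v1.20.0 run, pid 66232, finds no control-plane containers at all, its `kubectl describe nodes` is refused on localhost:8443, and after about four minutes it gives up on restarting the existing control plane and falls back to `kubeadm reset`. Note that the gathering pass above does not abort on the failed describe; it records the error and keeps collecting the remaining sources. A minimal Go sketch of that best-effort pattern follows; the function name is invented for illustration, and it assumes journalctl/crictl are available where it runs.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one diagnostic command and, on failure, records the error
    // plus whatever output there was instead of aborting the whole pass,
    // matching the behaviour around the failed describe-nodes above.
    // This is a sketch, not minikube's logs.go API.
    func gather(label, name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            fmt.Printf("W failed to gather %s: %v\n%s\n", label, err, out)
            return
        }
        fmt.Printf("I %s:\n%s\n", label, out)
    }

    func main() {
        gather("kubelet", "sudo", "journalctl", "-u", "kubelet", "-n", "400")
        gather("CRI-O", "sudo", "journalctl", "-u", "crio", "-n", "400")
        gather("container status", "sudo", "crictl", "ps", "-a")
    }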
	I0314 01:02:34.693825   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:37.191981   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:39.193819   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:35.155896   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:35.155929   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:35.198893   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:35.198923   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:35.258044   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:35.258076   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:35.296826   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:35.296859   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:35.349583   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:35.349619   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:35.400768   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:35.400805   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:35.528320   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:35.528357   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:35.571141   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:35.571174   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:35.612630   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:35.612658   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:36.034287   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:36.034321   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:36.093027   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:36.093054   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:36.150546   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:36.150589   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:38.673291   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 01:02:38.678087   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0314 01:02:38.679655   66021 api_server.go:141] control plane version: v1.28.4
	I0314 01:02:38.679674   66021 api_server.go:131] duration metric: took 3.956573598s to wait for apiserver health ...
	I0314 01:02:38.679680   66021 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:02:38.679700   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:38.679741   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:38.727884   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:38.727908   66021 cri.go:89] found id: ""
	I0314 01:02:38.727918   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:38.727974   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.732935   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:38.733003   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:38.771359   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:38.771387   66021 cri.go:89] found id: ""
	I0314 01:02:38.771397   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:38.771452   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.775888   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:38.775948   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:38.814905   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:38.814934   66021 cri.go:89] found id: ""
	I0314 01:02:38.814944   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:38.815018   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.820018   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:38.820096   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:38.869174   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:38.869200   66021 cri.go:89] found id: ""
	I0314 01:02:38.869210   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:38.869268   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.879998   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:38.880071   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:38.960143   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:38.960187   66021 cri.go:89] found id: ""
	I0314 01:02:38.960198   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:38.960258   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.964872   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:38.964940   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:39.005104   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:39.005126   66021 cri.go:89] found id: ""
	I0314 01:02:39.005134   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:39.005178   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.009751   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:39.009803   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:39.048232   66021 cri.go:89] found id: ""
	I0314 01:02:39.048263   66021 logs.go:276] 0 containers: []
	W0314 01:02:39.048274   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:39.048281   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:39.048335   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:39.087548   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:39.087568   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:39.087572   66021 cri.go:89] found id: ""
	I0314 01:02:39.087579   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:39.087624   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.092379   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.097599   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:39.097621   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:39.236455   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:39.236484   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:39.284275   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:39.284300   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:39.341908   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:39.341939   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:39.384407   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:39.384435   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:39.445137   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:39.445167   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:39.501656   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:39.501686   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:39.567627   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:39.567661   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:39.584561   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:39.584601   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:39.626131   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:39.626196   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:40.002525   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:40.002572   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:40.058721   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:40.058753   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:40.097905   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:40.097941   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:39.562661   66232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.345580159s)
	I0314 01:02:39.562733   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:39.579845   66232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 01:02:39.592242   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:02:39.603936   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:02:39.603962   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 01:02:39.604023   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:02:39.614854   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:02:39.614909   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:02:39.626602   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:02:39.637282   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:02:39.637334   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:02:39.650019   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:02:39.662020   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:02:39.662084   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:02:39.674740   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:02:39.685131   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:02:39.685190   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 01:02:39.696251   66232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:02:39.768972   66232 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 01:02:39.769055   66232 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:02:39.926950   66232 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:02:39.927086   66232 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:02:39.927239   66232 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 01:02:40.161671   66232 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:02:40.164039   66232 out.go:204]   - Generating certificates and keys ...
	I0314 01:02:40.164124   66232 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:02:40.164219   66232 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:02:40.164321   66232 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 01:02:40.164411   66232 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 01:02:40.164508   66232 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 01:02:40.164595   66232 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 01:02:40.164680   66232 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 01:02:40.164762   66232 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 01:02:40.164868   66232 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 01:02:40.164982   66232 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 01:02:40.165050   66232 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 01:02:40.165123   66232 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:02:40.264416   66232 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:02:40.417229   66232 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:02:40.489457   66232 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:02:40.743517   66232 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:02:40.759319   66232 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:02:40.760643   66232 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:02:40.760715   66232 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:02:40.939953   66232 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 01:02:42.643820   66021 system_pods.go:59] 8 kube-system pods found
	I0314 01:02:42.643846   66021 system_pods.go:61] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running
	I0314 01:02:42.643851   66021 system_pods.go:61] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running
	I0314 01:02:42.643854   66021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running
	I0314 01:02:42.643858   66021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running
	I0314 01:02:42.643861   66021 system_pods.go:61] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running
	I0314 01:02:42.643863   66021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running
	I0314 01:02:42.643869   66021 system_pods.go:61] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:42.643874   66021 system_pods.go:61] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 01:02:42.643881   66021 system_pods.go:74] duration metric: took 3.964195909s to wait for pod list to return data ...
	I0314 01:02:42.643888   66021 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:02:42.646461   66021 default_sa.go:45] found service account: "default"
	I0314 01:02:42.646481   66021 default_sa.go:55] duration metric: took 2.585464ms for default service account to be created ...
	I0314 01:02:42.646490   66021 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:02:42.651961   66021 system_pods.go:86] 8 kube-system pods found
	I0314 01:02:42.651983   66021 system_pods.go:89] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running
	I0314 01:02:42.651989   66021 system_pods.go:89] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running
	I0314 01:02:42.651993   66021 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running
	I0314 01:02:42.651998   66021 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running
	I0314 01:02:42.652002   66021 system_pods.go:89] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running
	I0314 01:02:42.652006   66021 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running
	I0314 01:02:42.652012   66021 system_pods.go:89] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:42.652019   66021 system_pods.go:89] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 01:02:42.652027   66021 system_pods.go:126] duration metric: took 5.530611ms to wait for k8s-apps to be running ...
	I0314 01:02:42.652037   66021 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:02:42.652078   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:42.669896   66021 system_svc.go:56] duration metric: took 17.851623ms WaitForService to wait for kubelet
	I0314 01:02:42.669930   66021 kubeadm.go:576] duration metric: took 4m24.774372903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:02:42.669965   66021 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:02:42.672766   66021 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:02:42.672789   66021 node_conditions.go:123] node cpu capacity is 2
	I0314 01:02:42.672802   66021 node_conditions.go:105] duration metric: took 2.830665ms to run NodePressure ...
	I0314 01:02:42.672813   66021 start.go:240] waiting for startup goroutines ...
	I0314 01:02:42.672819   66021 start.go:245] waiting for cluster config update ...
	I0314 01:02:42.672829   66021 start.go:254] writing updated cluster config ...
	I0314 01:02:42.673076   66021 ssh_runner.go:195] Run: rm -f paused
	I0314 01:02:42.721481   66021 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 01:02:42.723479   66021 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-652215" cluster and "default" namespace by default
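	(Run 66021, the default-k8s-diff-port-652215 cluster, completes normally: its apiserver healthz probe on https://192.168.61.7:8444/healthz returns 200, the kube-system pods are Running apart from the still-pending metrics-server, and kubectl is pointed at the new cluster. The same healthz probe can be reproduced by hand; a rough standalone Go version is below. The address is the one from this run, and InsecureSkipVerify is only there because this sketch has no cluster CA, which minikube itself of course has.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Probe the apiserver healthz endpoint reported in this run.
        // InsecureSkipVerify is acceptable only for a throwaway diagnostic.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.61.7:8444/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz returned:", resp.Status)
    }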
	I0314 01:02:40.942001   66232 out.go:204]   - Booting up control plane ...
	I0314 01:02:40.942144   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:02:40.951012   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:02:40.952452   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:02:40.953336   66232 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:02:40.960365   66232 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:02:41.692569   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:43.693995   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:46.193241   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:48.194371   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:50.692479   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:52.692654   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:55.192035   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:57.692909   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:00.193154   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:02.194296   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:04.196022   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:06.693006   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:09.192302   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:11.192955   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:13.692552   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:15.192489   65557 pod_ready.go:81] duration metric: took 4m0.007020608s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	E0314 01:03:15.192527   65557 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:03:15.192538   65557 pod_ready.go:38] duration metric: took 4m4.053934642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
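	(Run 65557, by contrast, spends its full four-minute budget polling metrics-server-57f55c9bc5-bbz2d for the Ready condition and then records the context-deadline-exceeded above before moving on to the apiserver checks. A rough equivalent of that poll-until-deadline loop, sketched with kubectl instead of minikube's own pod_ready helper, is below; the pod name and the 4-minute timeout are taken from this run, everything else is illustrative and assumes kubectl is pointed at the same cluster.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitPodReady polls the pod's Ready condition, sleeps between checks,
    // and gives up at the deadline, mirroring the wait logged above.
    func waitPodReady(namespace, pod string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "get", "pod", "-n", namespace, pod,
                "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
    }

    func main() {
        if err := waitPodReady("kube-system", "metrics-server-57f55c9bc5-bbz2d", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

	Substituting the pod name seen in the other run (metrics-server-57f55c9bc5-kll8v) reproduces the same check against the default-k8s-diff-port cluster.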
	I0314 01:03:15.192554   65557 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:03:15.192587   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:15.192647   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:15.256619   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:15.256643   65557 cri.go:89] found id: ""
	I0314 01:03:15.256653   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:15.256707   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.262251   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:15.262317   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:15.305577   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:15.305605   65557 cri.go:89] found id: ""
	I0314 01:03:15.305613   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:15.305676   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.311058   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:15.311136   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:15.350580   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:15.350605   65557 cri.go:89] found id: ""
	I0314 01:03:15.350615   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:15.350675   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.355574   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:15.355637   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:15.395248   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:15.395278   65557 cri.go:89] found id: ""
	I0314 01:03:15.395289   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:15.395345   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.400714   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:15.400789   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:15.446181   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:15.446207   65557 cri.go:89] found id: ""
	I0314 01:03:15.446217   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:15.446280   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.451142   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:15.451220   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:15.499079   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:15.499106   65557 cri.go:89] found id: ""
	I0314 01:03:15.499120   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:15.499178   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.504092   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:15.504158   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:15.546791   65557 cri.go:89] found id: ""
	I0314 01:03:15.546820   65557 logs.go:276] 0 containers: []
	W0314 01:03:15.546830   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:15.546838   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:15.546898   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:15.586249   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:15.586271   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:15.586275   65557 cri.go:89] found id: ""
	I0314 01:03:15.586282   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:15.586341   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.590680   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.595060   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:15.595086   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:16.112562   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:16.112623   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:16.172847   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:16.172882   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:16.333057   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:16.333098   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:16.386456   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:16.386490   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:16.444375   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:16.444402   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:16.486220   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:16.486260   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:16.526438   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:16.526470   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:16.576927   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:16.576958   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:16.592148   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:16.592174   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:16.648514   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:16.648545   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:16.695025   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:16.695051   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:16.746925   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:16.746955   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.285952   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:03:19.304257   65557 api_server.go:72] duration metric: took 4m15.904145845s to wait for apiserver process to appear ...
	I0314 01:03:19.304286   65557 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:03:19.304325   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:19.304387   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:20.960311   66232 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 01:03:20.961416   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:20.961634   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
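	(The v1.20.0 run, pid 66232, never gets past kubeadm's wait-control-plane phase: the 40s kubelet check times out because nothing answers on the kubelet's health port. kubeadm's own message above names the probe to repeat by hand, `curl -sSL http://localhost:10248/healthz`. The same check written in Go is below; it must run on the node itself, since the port is bound to localhost there.)

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Same probe kubeadm suggests in the message above; a connection
        // refused here means the kubelet is not serving its health endpoint.
        client := &http.Client{Timeout: 3 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            fmt.Println("kubelet healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("kubelet healthz:", resp.Status)
    }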
	I0314 01:03:19.352722   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:19.352749   65557 cri.go:89] found id: ""
	I0314 01:03:19.352758   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:19.352813   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.358745   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:19.358840   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:19.398652   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:19.398677   65557 cri.go:89] found id: ""
	I0314 01:03:19.398687   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:19.398745   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.403737   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:19.403812   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:19.449705   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:19.449789   65557 cri.go:89] found id: ""
	I0314 01:03:19.449804   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:19.449875   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.454646   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:19.454703   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:19.497413   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:19.497437   65557 cri.go:89] found id: ""
	I0314 01:03:19.497446   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:19.497505   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.502314   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:19.502383   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:19.544651   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:19.544670   65557 cri.go:89] found id: ""
	I0314 01:03:19.544677   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:19.544734   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.549565   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:19.549627   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:19.588946   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:19.588964   65557 cri.go:89] found id: ""
	I0314 01:03:19.588971   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:19.589021   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.593896   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:19.593962   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:19.635716   65557 cri.go:89] found id: ""
	I0314 01:03:19.635742   65557 logs.go:276] 0 containers: []
	W0314 01:03:19.635753   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:19.635759   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:19.635815   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:19.677464   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.677489   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:19.677495   65557 cri.go:89] found id: ""
	I0314 01:03:19.677505   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:19.677565   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.682353   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.687167   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:19.687188   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:19.736953   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:19.736991   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:19.781476   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:19.781506   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:19.822236   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:19.822265   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:19.866289   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:19.866312   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:19.911787   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:19.911815   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.950065   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:19.950101   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:19.989521   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:19.989554   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:20.384831   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:20.384868   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:20.441338   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:20.441369   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:20.457686   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:20.457713   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:20.576908   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:20.576939   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:20.620339   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:20.620368   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:23.171840   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 01:03:23.178026   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0314 01:03:23.179553   65557 api_server.go:141] control plane version: v1.28.4
	I0314 01:03:23.179581   65557 api_server.go:131] duration metric: took 3.875286718s to wait for apiserver health ...
	I0314 01:03:23.179592   65557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:03:23.179620   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:23.179680   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:23.228503   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:23.228523   65557 cri.go:89] found id: ""
	I0314 01:03:23.228530   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:23.228582   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.233166   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:23.233236   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:23.274079   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:23.274110   65557 cri.go:89] found id: ""
	I0314 01:03:23.274120   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:23.274179   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.279453   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:23.279559   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:23.319821   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:23.319844   65557 cri.go:89] found id: ""
	I0314 01:03:23.319854   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:23.319914   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.325134   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:23.325199   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:23.366475   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:23.366496   65557 cri.go:89] found id: ""
	I0314 01:03:23.366503   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:23.366547   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.371660   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:23.371716   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:23.416034   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:23.416060   65557 cri.go:89] found id: ""
	I0314 01:03:23.416069   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:23.416128   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.421256   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:23.421319   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:23.461772   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:23.461792   65557 cri.go:89] found id: ""
	I0314 01:03:23.461799   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:23.461848   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.466581   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:23.466644   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:23.513583   65557 cri.go:89] found id: ""
	I0314 01:03:23.513610   65557 logs.go:276] 0 containers: []
	W0314 01:03:23.513626   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:23.513633   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:23.513693   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:23.554856   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:23.554875   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:23.554879   65557 cri.go:89] found id: ""
	I0314 01:03:23.554885   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:23.554932   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.559820   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.564514   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:23.564534   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:23.619210   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:23.619246   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:23.750881   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:23.750908   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:23.800300   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:23.800342   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:23.849606   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:23.849637   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:23.896168   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:23.896194   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:23.938976   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:23.939008   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:23.955960   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:23.955988   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:23.999961   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:23.999990   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:24.044533   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:24.044562   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:24.097691   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:24.097720   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:24.137172   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:24.137207   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:24.480724   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:24.480767   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:27.042143   65557 system_pods.go:59] 8 kube-system pods found
	I0314 01:03:27.042177   65557 system_pods.go:61] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running
	I0314 01:03:27.042185   65557 system_pods.go:61] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running
	I0314 01:03:27.042191   65557 system_pods.go:61] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running
	I0314 01:03:27.042197   65557 system_pods.go:61] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running
	I0314 01:03:27.042201   65557 system_pods.go:61] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running
	I0314 01:03:27.042206   65557 system_pods.go:61] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running
	I0314 01:03:27.042213   65557 system_pods.go:61] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:03:27.042220   65557 system_pods.go:61] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running
	I0314 01:03:27.042231   65557 system_pods.go:74] duration metric: took 3.862631414s to wait for pod list to return data ...
	I0314 01:03:27.042241   65557 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:03:27.045464   65557 default_sa.go:45] found service account: "default"
	I0314 01:03:27.045542   65557 default_sa.go:55] duration metric: took 3.286713ms for default service account to be created ...
	I0314 01:03:27.045573   65557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:03:27.057164   65557 system_pods.go:86] 8 kube-system pods found
	I0314 01:03:27.057193   65557 system_pods.go:89] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running
	I0314 01:03:27.057199   65557 system_pods.go:89] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running
	I0314 01:03:27.057204   65557 system_pods.go:89] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running
	I0314 01:03:27.057209   65557 system_pods.go:89] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running
	I0314 01:03:27.057213   65557 system_pods.go:89] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running
	I0314 01:03:27.057217   65557 system_pods.go:89] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running
	I0314 01:03:27.057224   65557 system_pods.go:89] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:03:27.057236   65557 system_pods.go:89] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running
	I0314 01:03:27.057243   65557 system_pods.go:126] duration metric: took 11.663667ms to wait for k8s-apps to be running ...
	I0314 01:03:27.057249   65557 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:03:27.057295   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:03:27.075469   65557 system_svc.go:56] duration metric: took 18.20927ms WaitForService to wait for kubelet
	I0314 01:03:27.075501   65557 kubeadm.go:576] duration metric: took 4m23.675393774s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:03:27.075521   65557 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:03:27.079149   65557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:03:27.079177   65557 node_conditions.go:123] node cpu capacity is 2
	I0314 01:03:27.079191   65557 node_conditions.go:105] duration metric: took 3.664222ms to run NodePressure ...
	I0314 01:03:27.079204   65557 start.go:240] waiting for startup goroutines ...
	I0314 01:03:27.079214   65557 start.go:245] waiting for cluster config update ...
	I0314 01:03:27.079228   65557 start.go:254] writing updated cluster config ...
	I0314 01:03:27.079567   65557 ssh_runner.go:195] Run: rm -f paused
	I0314 01:03:27.128453   65557 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 01:03:27.131043   65557 out.go:177] * Done! kubectl is now configured to use "embed-certs-164135" cluster and "default" namespace by default
	I0314 01:03:25.961895   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:25.962127   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:35.962149   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:35.962352   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:55.963116   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:55.963372   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:04:35.964528   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:04:35.964814   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:04:35.964841   66232 kubeadm.go:309] 
	I0314 01:04:35.964900   66232 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 01:04:35.964961   66232 kubeadm.go:309] 		timed out waiting for the condition
	I0314 01:04:35.964972   66232 kubeadm.go:309] 
	I0314 01:04:35.965026   66232 kubeadm.go:309] 	This error is likely caused by:
	I0314 01:04:35.965074   66232 kubeadm.go:309] 		- The kubelet is not running
	I0314 01:04:35.965219   66232 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 01:04:35.965231   66232 kubeadm.go:309] 
	I0314 01:04:35.965372   66232 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 01:04:35.965421   66232 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 01:04:35.965476   66232 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 01:04:35.965489   66232 kubeadm.go:309] 
	I0314 01:04:35.965638   66232 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 01:04:35.965743   66232 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 01:04:35.965753   66232 kubeadm.go:309] 
	I0314 01:04:35.965872   66232 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 01:04:35.965991   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 01:04:35.966110   66232 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 01:04:35.966220   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 01:04:35.966237   66232 kubeadm.go:309] 
	I0314 01:04:35.966903   66232 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:04:35.967031   66232 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 01:04:35.967165   66232 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0314 01:04:35.967278   66232 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0314 01:04:35.967374   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 01:04:36.533381   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:04:36.550315   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:04:36.562559   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:04:36.562582   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 01:04:36.562646   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:04:36.573080   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:04:36.573148   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:04:36.583367   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:04:36.592837   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:04:36.592905   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:04:36.602671   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:04:36.611880   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:04:36.611923   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:04:36.621373   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:04:36.630200   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:04:36.630250   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 01:04:36.639622   66232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:04:36.876475   66232 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:06:32.905531   66232 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 01:06:32.905658   66232 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 01:06:32.907378   66232 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 01:06:32.907462   66232 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:06:32.907597   66232 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:06:32.907758   66232 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:06:32.907878   66232 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 01:06:32.907969   66232 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:06:32.909826   66232 out.go:204]   - Generating certificates and keys ...
	I0314 01:06:32.909915   66232 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:06:32.909976   66232 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:06:32.910065   66232 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 01:06:32.910143   66232 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 01:06:32.910232   66232 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 01:06:32.910306   66232 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 01:06:32.910371   66232 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 01:06:32.910450   66232 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 01:06:32.910516   66232 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 01:06:32.910579   66232 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 01:06:32.910616   66232 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 01:06:32.910705   66232 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:06:32.910809   66232 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:06:32.910860   66232 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:06:32.910946   66232 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:06:32.911032   66232 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:06:32.911131   66232 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:06:32.911225   66232 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:06:32.911290   66232 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:06:32.911360   66232 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 01:06:32.912972   66232 out.go:204]   - Booting up control plane ...
	I0314 01:06:32.913087   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:06:32.913169   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:06:32.913260   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:06:32.913336   66232 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:06:32.913475   66232 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:06:32.913555   66232 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 01:06:32.913645   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.913879   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.913979   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914216   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914294   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914461   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914521   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914704   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914827   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.915063   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.915076   66232 kubeadm.go:309] 
	I0314 01:06:32.915112   66232 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 01:06:32.915167   66232 kubeadm.go:309] 		timed out waiting for the condition
	I0314 01:06:32.915177   66232 kubeadm.go:309] 
	I0314 01:06:32.915230   66232 kubeadm.go:309] 	This error is likely caused by:
	I0314 01:06:32.915269   66232 kubeadm.go:309] 		- The kubelet is not running
	I0314 01:06:32.915353   66232 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 01:06:32.915360   66232 kubeadm.go:309] 
	I0314 01:06:32.915441   66232 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 01:06:32.915469   66232 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 01:06:32.915498   66232 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 01:06:32.915505   66232 kubeadm.go:309] 
	I0314 01:06:32.915613   66232 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 01:06:32.915700   66232 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 01:06:32.915712   66232 kubeadm.go:309] 
	I0314 01:06:32.915855   66232 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 01:06:32.915955   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 01:06:32.916023   66232 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 01:06:32.916088   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 01:06:32.916154   66232 kubeadm.go:393] duration metric: took 7m58.036160375s to StartCluster
	I0314 01:06:32.916166   66232 kubeadm.go:309] 
	I0314 01:06:32.916226   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:06:32.916295   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:06:32.972336   66232 cri.go:89] found id: ""
	I0314 01:06:32.972364   66232 logs.go:276] 0 containers: []
	W0314 01:06:32.972371   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:06:32.972380   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:06:32.972434   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:06:33.023008   66232 cri.go:89] found id: ""
	I0314 01:06:33.023039   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.023050   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:06:33.023057   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:06:33.023130   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:06:33.061974   66232 cri.go:89] found id: ""
	I0314 01:06:33.062002   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.062011   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:06:33.062017   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:06:33.062085   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:06:33.101221   66232 cri.go:89] found id: ""
	I0314 01:06:33.101252   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.101264   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:06:33.101271   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:06:33.101330   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:06:33.139665   66232 cri.go:89] found id: ""
	I0314 01:06:33.139689   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.139697   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:06:33.139707   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:06:33.139753   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:06:33.186493   66232 cri.go:89] found id: ""
	I0314 01:06:33.186519   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.186530   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:06:33.186538   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:06:33.186610   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:06:33.236042   66232 cri.go:89] found id: ""
	I0314 01:06:33.236071   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.236083   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:06:33.236091   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:06:33.236148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:06:33.279285   66232 cri.go:89] found id: ""
	I0314 01:06:33.279316   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.279326   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:06:33.279338   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:06:33.279361   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:06:33.331702   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:06:33.331734   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:06:33.347222   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:06:33.347249   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:06:33.437201   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:06:33.437225   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:06:33.437240   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:06:33.550099   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:06:33.550135   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0314 01:06:33.596794   66232 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 01:06:33.596833   66232 out.go:239] * 
	W0314 01:06:33.596906   66232 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 01:06:33.596927   66232 out.go:239] * 
	W0314 01:06:33.597713   66232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 01:06:33.601567   66232 out.go:177] 
	W0314 01:06:33.602661   66232 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 01:06:33.602704   66232 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 01:06:33.602722   66232 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 01:06:33.604223   66232 out.go:177] 
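The suggestion printed above points at the kubelet cgroup driver. A minimal sketch of the retry that suggestion describes, assuming the profile name old-k8s-version-004791 taken from the CRI-O log below, the kvm2/crio configuration implied by this job, and that the broken cluster is recreated from scratch rather than resumed:

	# remove the failed profile, then start it again with the driver override the log suggests
	minikube delete -p old-k8s-version-004791
	minikube start -p old-k8s-version-004791 \
	  --driver=kvm2 \
	  --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

Only the --extra-config flag is named in the suggestion itself; the driver, runtime, and Kubernetes version flags are assumptions based on the KVM_Linux_crio job and the "Using Kubernetes version: v1.20.0" line in the kubeadm output, not something this log confirms.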
	
	
	==> CRI-O <==
	Mar 14 01:15:38 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:38.995839472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378938995814185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8a90699-e3c4-413f-b702-02627c549239 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:15:38 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:38.996589188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2052660-944f-498b-bbf0-95c09ccbf5be name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:15:38 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:38.996669258Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2052660-944f-498b-bbf0-95c09ccbf5be name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:15:38 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:38.996714714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c2052660-944f-498b-bbf0-95c09ccbf5be name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.031987834Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d75f63e-8b4d-4cfd-ad32-2c0c3b0ac178 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.032140749Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d75f63e-8b4d-4cfd-ad32-2c0c3b0ac178 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.034207103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad3f12fc-c612-4bb9-b654-35e80f84305f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.034691984Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378939034665583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad3f12fc-c612-4bb9-b654-35e80f84305f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.035261260Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7e58868-fb65-4a51-9b97-721d8e8855c9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.035339988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7e58868-fb65-4a51-9b97-721d8e8855c9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.035381282Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a7e58868-fb65-4a51-9b97-721d8e8855c9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.071774306Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c9a9424-d3a2-487e-987c-74efa59b4c8d name=/runtime.v1.RuntimeService/Version
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.071878672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c9a9424-d3a2-487e-987c-74efa59b4c8d name=/runtime.v1.RuntimeService/Version
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.073127932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ebb7aeb-338c-4012-80f1-9bec7a93431e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.073604323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378939073577299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ebb7aeb-338c-4012-80f1-9bec7a93431e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.074292543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d88a415b-60f1-4b30-9939-c4f120ff261f name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.074429899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d88a415b-60f1-4b30-9939-c4f120ff261f name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.074497455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d88a415b-60f1-4b30-9939-c4f120ff261f name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.107940826Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e41819a-7d6d-417c-85a7-e4fb5686abaa name=/runtime.v1.RuntimeService/Version
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.108127656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e41819a-7d6d-417c-85a7-e4fb5686abaa name=/runtime.v1.RuntimeService/Version
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.109348301Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d034126-bbef-4112-b123-32a848790847 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.109818837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710378939109789876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d034126-bbef-4112-b123-32a848790847 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.110394113Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0bae7da5-1d73-4765-8f69-3419cc828aed name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.110472859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0bae7da5-1d73-4765-8f69-3419cc828aed name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:15:39 old-k8s-version-004791 crio[647]: time="2024-03-14 01:15:39.110554740Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0bae7da5-1d73-4765-8f69-3419cc828aed name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar14 00:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052991] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047909] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.890210] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.079753] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.730198] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.316199] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.062984] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075521] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.214616] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.150711] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.294146] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.930331] systemd-fstab-generator[830]: Ignoring "noauto" option for root device
	[  +0.061685] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.999458] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +8.247240] kauditd_printk_skb: 46 callbacks suppressed
	[Mar14 01:02] systemd-fstab-generator[4935]: Ignoring "noauto" option for root device
	[Mar14 01:04] systemd-fstab-generator[5216]: Ignoring "noauto" option for root device
	[  +0.077634] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:15:39 up 17 min,  0 users,  load average: 0.00, 0.02, 0.05
	Linux old-k8s-version-004791 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:302 +0x1a5
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:268 +0x295
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]: goroutine 150 [chan receive]:
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedProcessor).run(0xc000b95b90, 0xc000d4b0e0)
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000e029b0, 0xc000e20f40)
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]: goroutine 151 [chan receive]:
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000c99320)
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Mar 14 01:15:34 old-k8s-version-004791 kubelet[6387]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Mar 14 01:15:35 old-k8s-version-004791 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Mar 14 01:15:35 old-k8s-version-004791 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 14 01:15:35 old-k8s-version-004791 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 14 01:15:35 old-k8s-version-004791 kubelet[6395]: I0314 01:15:35.194758    6395 server.go:416] Version: v1.20.0
	Mar 14 01:15:35 old-k8s-version-004791 kubelet[6395]: I0314 01:15:35.195103    6395 server.go:837] Client rotation is on, will bootstrap in background
	Mar 14 01:15:35 old-k8s-version-004791 kubelet[6395]: I0314 01:15:35.197245    6395 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 14 01:15:35 old-k8s-version-004791 kubelet[6395]: W0314 01:15:35.198555    6395 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 14 01:15:35 old-k8s-version-004791 kubelet[6395]: I0314 01:15:35.199088    6395 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-004791 -n old-k8s-version-004791
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-004791 -n old-k8s-version-004791: exit status 2 (258.804384ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-004791" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.50s)
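(Triage note: the kubelet stack traces and "restart counter is at 114" above point at a crash-looping kubelet, and the apiserver on localhost:8443 never comes back, which is why the "container status" table is empty. Since kubectl is unusable in that state, a hands-on check would go through SSH into the node; the commands below are an illustrative sketch under that assumption and are not part of the captured test output:

    out/minikube-linux-amd64 -p old-k8s-version-004791 ssh "sudo journalctl -u kubelet --no-pager -n 100"
    out/minikube-linux-amd64 -p old-k8s-version-004791 ssh "sudo crictl ps -a"

The first command shows why the kubelet keeps restarting; the second would confirm that no control-plane containers are running, consistent with the empty container list logged by cri-o above.)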

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (404s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-585806 -n no-preload-585806
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-14 01:18:20.730000413 +0000 UTC m=+6728.799761888
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-585806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-585806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.451µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-585806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
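(Triage note: the harness waits for pods labeled k8s-app=kubernetes-dashboard and then checks that the dashboard-metrics-scraper deployment uses an image containing registry.k8s.io/echoserver:1.4, matching the --images=MetricsScraper override recorded in the Audit table below. The equivalent manual checks, shown here as an illustrative sketch and not part of the captured test output, would be:

    kubectl --context no-preload-585806 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context no-preload-585806 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

In the run above, the harness's own describe command was cut off by the 9m0s context deadline, so no deployment info was captured.)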
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-585806 -n no-preload-585806
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-585806 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-585806 logs -n 25: (1.344199985s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-326260 sudo find                             | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo crio                             | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-326260                                       | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-573365 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | disable-driver-mounts-573365                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:51 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-164135            | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-585806             | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-652215  | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC | 14 Mar 24 00:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC |                     |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-004791        | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-164135                 | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC | 14 Mar 24 01:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-585806                  | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-652215       | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 00:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-004791             | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC | 14 Mar 24 00:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 01:17 UTC | 14 Mar 24 01:17 UTC |
	| start   | -p newest-cni-970859 --memory=2200 --alsologtostderr   | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:17 UTC | 14 Mar 24 01:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-970859             | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:18 UTC | 14 Mar 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-970859                                   | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 01:17:17
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 01:17:17.059213   70841 out.go:291] Setting OutFile to fd 1 ...
	I0314 01:17:17.059464   70841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 01:17:17.059473   70841 out.go:304] Setting ErrFile to fd 2...
	I0314 01:17:17.059478   70841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 01:17:17.059666   70841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 01:17:17.060253   70841 out.go:298] Setting JSON to false
	I0314 01:17:17.061238   70841 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7180,"bootTime":1710371857,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 01:17:17.061302   70841 start.go:139] virtualization: kvm guest
	I0314 01:17:17.063710   70841 out.go:177] * [newest-cni-970859] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 01:17:17.065499   70841 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 01:17:17.067107   70841 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 01:17:17.065390   70841 notify.go:220] Checking for updates...
	I0314 01:17:17.070323   70841 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 01:17:17.071952   70841 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 01:17:17.073404   70841 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 01:17:17.074815   70841 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 01:17:17.076481   70841 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 01:17:17.076565   70841 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 01:17:17.076650   70841 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 01:17:17.076737   70841 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 01:17:17.113840   70841 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 01:17:17.115171   70841 start.go:297] selected driver: kvm2
	I0314 01:17:17.115183   70841 start.go:901] validating driver "kvm2" against <nil>
	I0314 01:17:17.115194   70841 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 01:17:17.115844   70841 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:17:17.115959   70841 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 01:17:17.131550   70841 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 01:17:17.131620   70841 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0314 01:17:17.131652   70841 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0314 01:17:17.131959   70841 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0314 01:17:17.132006   70841 cni.go:84] Creating CNI manager for ""
	I0314 01:17:17.132017   70841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 01:17:17.132032   70841 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 01:17:17.132136   70841 start.go:340] cluster config:
	{Name:newest-cni-970859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-970859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 01:17:17.132242   70841 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:17:17.134639   70841 out.go:177] * Starting "newest-cni-970859" primary control-plane node in "newest-cni-970859" cluster
	I0314 01:17:17.136425   70841 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 01:17:17.136461   70841 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0314 01:17:17.136468   70841 cache.go:56] Caching tarball of preloaded images
	I0314 01:17:17.136548   70841 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 01:17:17.136559   70841 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0314 01:17:17.136642   70841 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/config.json ...
	I0314 01:17:17.136658   70841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/config.json: {Name:mkbdaa6b521d00c9337386a0938e9b7570c646ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:17:17.136775   70841 start.go:360] acquireMachinesLock for newest-cni-970859: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 01:17:17.136800   70841 start.go:364] duration metric: took 13.432µs to acquireMachinesLock for "newest-cni-970859"
	I0314 01:17:17.136813   70841 start.go:93] Provisioning new machine with config: &{Name:newest-cni-970859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-970859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 01:17:17.136873   70841 start.go:125] createHost starting for "" (driver="kvm2")
	I0314 01:17:17.138812   70841 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 01:17:17.138946   70841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:17:17.138979   70841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:17:17.153781   70841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39003
	I0314 01:17:17.154195   70841 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:17:17.154700   70841 main.go:141] libmachine: Using API Version  1
	I0314 01:17:17.154721   70841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:17:17.155023   70841 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:17:17.155158   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetMachineName
	I0314 01:17:17.155300   70841 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:17:17.155437   70841 start.go:159] libmachine.API.Create for "newest-cni-970859" (driver="kvm2")
	I0314 01:17:17.155460   70841 client.go:168] LocalClient.Create starting
	I0314 01:17:17.155490   70841 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem
	I0314 01:17:17.155519   70841 main.go:141] libmachine: Decoding PEM data...
	I0314 01:17:17.155532   70841 main.go:141] libmachine: Parsing certificate...
	I0314 01:17:17.155584   70841 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem
	I0314 01:17:17.155602   70841 main.go:141] libmachine: Decoding PEM data...
	I0314 01:17:17.155613   70841 main.go:141] libmachine: Parsing certificate...
	I0314 01:17:17.155634   70841 main.go:141] libmachine: Running pre-create checks...
	I0314 01:17:17.155642   70841 main.go:141] libmachine: (newest-cni-970859) Calling .PreCreateCheck
	I0314 01:17:17.155996   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetConfigRaw
	I0314 01:17:17.156361   70841 main.go:141] libmachine: Creating machine...
	I0314 01:17:17.156374   70841 main.go:141] libmachine: (newest-cni-970859) Calling .Create
	I0314 01:17:17.156534   70841 main.go:141] libmachine: (newest-cni-970859) Creating KVM machine...
	I0314 01:17:17.157791   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found existing default KVM network
	I0314 01:17:17.158859   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:17.158698   70864 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1f:5a:d2} reservation:<nil>}
	I0314 01:17:17.159677   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:17.159589   70864 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:02:75:0e} reservation:<nil>}
	I0314 01:17:17.160416   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:17.160313   70864 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:6b:a1:6d} reservation:<nil>}
	I0314 01:17:17.161474   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:17.161378   70864 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002e70f0}
	I0314 01:17:17.161506   70841 main.go:141] libmachine: (newest-cni-970859) DBG | created network xml: 
	I0314 01:17:17.161518   70841 main.go:141] libmachine: (newest-cni-970859) DBG | <network>
	I0314 01:17:17.161535   70841 main.go:141] libmachine: (newest-cni-970859) DBG |   <name>mk-newest-cni-970859</name>
	I0314 01:17:17.161550   70841 main.go:141] libmachine: (newest-cni-970859) DBG |   <dns enable='no'/>
	I0314 01:17:17.161557   70841 main.go:141] libmachine: (newest-cni-970859) DBG |   
	I0314 01:17:17.161574   70841 main.go:141] libmachine: (newest-cni-970859) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0314 01:17:17.161590   70841 main.go:141] libmachine: (newest-cni-970859) DBG |     <dhcp>
	I0314 01:17:17.161605   70841 main.go:141] libmachine: (newest-cni-970859) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0314 01:17:17.161615   70841 main.go:141] libmachine: (newest-cni-970859) DBG |     </dhcp>
	I0314 01:17:17.161641   70841 main.go:141] libmachine: (newest-cni-970859) DBG |   </ip>
	I0314 01:17:17.161666   70841 main.go:141] libmachine: (newest-cni-970859) DBG |   
	I0314 01:17:17.161679   70841 main.go:141] libmachine: (newest-cni-970859) DBG | </network>
	I0314 01:17:17.161690   70841 main.go:141] libmachine: (newest-cni-970859) DBG | 
	I0314 01:17:17.166869   70841 main.go:141] libmachine: (newest-cni-970859) DBG | trying to create private KVM network mk-newest-cni-970859 192.168.72.0/24...
	I0314 01:17:17.239983   70841 main.go:141] libmachine: (newest-cni-970859) Setting up store path in /home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859 ...
	I0314 01:17:17.240024   70841 main.go:141] libmachine: (newest-cni-970859) Building disk image from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 01:17:17.240035   70841 main.go:141] libmachine: (newest-cni-970859) DBG | private KVM network mk-newest-cni-970859 192.168.72.0/24 created
	I0314 01:17:17.240054   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:17.239892   70864 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 01:17:17.240072   70841 main.go:141] libmachine: (newest-cni-970859) Downloading /home/jenkins/minikube-integration/18375-4912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 01:17:17.457277   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:17.457147   70864 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa...
	I0314 01:17:17.821134   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:17.820980   70864 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/newest-cni-970859.rawdisk...
	I0314 01:17:17.821208   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Writing magic tar header
	I0314 01:17:17.821230   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Writing SSH key tar header
	I0314 01:17:17.821247   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:17.821123   70864 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859 ...
	I0314 01:17:17.821273   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859
	I0314 01:17:17.821288   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube/machines
	I0314 01:17:17.821307   70841 main.go:141] libmachine: (newest-cni-970859) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859 (perms=drwx------)
	I0314 01:17:17.821329   70841 main.go:141] libmachine: (newest-cni-970859) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube/machines (perms=drwxr-xr-x)
	I0314 01:17:17.821343   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 01:17:17.821353   70841 main.go:141] libmachine: (newest-cni-970859) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912/.minikube (perms=drwxr-xr-x)
	I0314 01:17:17.821368   70841 main.go:141] libmachine: (newest-cni-970859) Setting executable bit set on /home/jenkins/minikube-integration/18375-4912 (perms=drwxrwxr-x)
	I0314 01:17:17.821380   70841 main.go:141] libmachine: (newest-cni-970859) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 01:17:17.821392   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4912
	I0314 01:17:17.821407   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 01:17:17.821417   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Checking permissions on dir: /home/jenkins
	I0314 01:17:17.821426   70841 main.go:141] libmachine: (newest-cni-970859) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 01:17:17.821436   70841 main.go:141] libmachine: (newest-cni-970859) Creating domain...
	I0314 01:17:17.821447   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Checking permissions on dir: /home
	I0314 01:17:17.821455   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Skipping /home - not owner
	I0314 01:17:17.822636   70841 main.go:141] libmachine: (newest-cni-970859) define libvirt domain using xml: 
	I0314 01:17:17.822664   70841 main.go:141] libmachine: (newest-cni-970859) <domain type='kvm'>
	I0314 01:17:17.822677   70841 main.go:141] libmachine: (newest-cni-970859)   <name>newest-cni-970859</name>
	I0314 01:17:17.822687   70841 main.go:141] libmachine: (newest-cni-970859)   <memory unit='MiB'>2200</memory>
	I0314 01:17:17.822698   70841 main.go:141] libmachine: (newest-cni-970859)   <vcpu>2</vcpu>
	I0314 01:17:17.822708   70841 main.go:141] libmachine: (newest-cni-970859)   <features>
	I0314 01:17:17.822717   70841 main.go:141] libmachine: (newest-cni-970859)     <acpi/>
	I0314 01:17:17.822724   70841 main.go:141] libmachine: (newest-cni-970859)     <apic/>
	I0314 01:17:17.822730   70841 main.go:141] libmachine: (newest-cni-970859)     <pae/>
	I0314 01:17:17.822736   70841 main.go:141] libmachine: (newest-cni-970859)     
	I0314 01:17:17.822743   70841 main.go:141] libmachine: (newest-cni-970859)   </features>
	I0314 01:17:17.822750   70841 main.go:141] libmachine: (newest-cni-970859)   <cpu mode='host-passthrough'>
	I0314 01:17:17.822778   70841 main.go:141] libmachine: (newest-cni-970859)   
	I0314 01:17:17.822786   70841 main.go:141] libmachine: (newest-cni-970859)   </cpu>
	I0314 01:17:17.822850   70841 main.go:141] libmachine: (newest-cni-970859)   <os>
	I0314 01:17:17.822882   70841 main.go:141] libmachine: (newest-cni-970859)     <type>hvm</type>
	I0314 01:17:17.822892   70841 main.go:141] libmachine: (newest-cni-970859)     <boot dev='cdrom'/>
	I0314 01:17:17.822901   70841 main.go:141] libmachine: (newest-cni-970859)     <boot dev='hd'/>
	I0314 01:17:17.822911   70841 main.go:141] libmachine: (newest-cni-970859)     <bootmenu enable='no'/>
	I0314 01:17:17.822921   70841 main.go:141] libmachine: (newest-cni-970859)   </os>
	I0314 01:17:17.822931   70841 main.go:141] libmachine: (newest-cni-970859)   <devices>
	I0314 01:17:17.822940   70841 main.go:141] libmachine: (newest-cni-970859)     <disk type='file' device='cdrom'>
	I0314 01:17:17.822954   70841 main.go:141] libmachine: (newest-cni-970859)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/boot2docker.iso'/>
	I0314 01:17:17.822966   70841 main.go:141] libmachine: (newest-cni-970859)       <target dev='hdc' bus='scsi'/>
	I0314 01:17:17.822973   70841 main.go:141] libmachine: (newest-cni-970859)       <readonly/>
	I0314 01:17:17.822983   70841 main.go:141] libmachine: (newest-cni-970859)     </disk>
	I0314 01:17:17.822993   70841 main.go:141] libmachine: (newest-cni-970859)     <disk type='file' device='disk'>
	I0314 01:17:17.823005   70841 main.go:141] libmachine: (newest-cni-970859)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 01:17:17.823022   70841 main.go:141] libmachine: (newest-cni-970859)       <source file='/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/newest-cni-970859.rawdisk'/>
	I0314 01:17:17.823033   70841 main.go:141] libmachine: (newest-cni-970859)       <target dev='hda' bus='virtio'/>
	I0314 01:17:17.823043   70841 main.go:141] libmachine: (newest-cni-970859)     </disk>
	I0314 01:17:17.823053   70841 main.go:141] libmachine: (newest-cni-970859)     <interface type='network'>
	I0314 01:17:17.823079   70841 main.go:141] libmachine: (newest-cni-970859)       <source network='mk-newest-cni-970859'/>
	I0314 01:17:17.823093   70841 main.go:141] libmachine: (newest-cni-970859)       <model type='virtio'/>
	I0314 01:17:17.823103   70841 main.go:141] libmachine: (newest-cni-970859)     </interface>
	I0314 01:17:17.823114   70841 main.go:141] libmachine: (newest-cni-970859)     <interface type='network'>
	I0314 01:17:17.823131   70841 main.go:141] libmachine: (newest-cni-970859)       <source network='default'/>
	I0314 01:17:17.823142   70841 main.go:141] libmachine: (newest-cni-970859)       <model type='virtio'/>
	I0314 01:17:17.823152   70841 main.go:141] libmachine: (newest-cni-970859)     </interface>
	I0314 01:17:17.823163   70841 main.go:141] libmachine: (newest-cni-970859)     <serial type='pty'>
	I0314 01:17:17.823174   70841 main.go:141] libmachine: (newest-cni-970859)       <target port='0'/>
	I0314 01:17:17.823184   70841 main.go:141] libmachine: (newest-cni-970859)     </serial>
	I0314 01:17:17.823193   70841 main.go:141] libmachine: (newest-cni-970859)     <console type='pty'>
	I0314 01:17:17.823203   70841 main.go:141] libmachine: (newest-cni-970859)       <target type='serial' port='0'/>
	I0314 01:17:17.823215   70841 main.go:141] libmachine: (newest-cni-970859)     </console>
	I0314 01:17:17.823225   70841 main.go:141] libmachine: (newest-cni-970859)     <rng model='virtio'>
	I0314 01:17:17.823236   70841 main.go:141] libmachine: (newest-cni-970859)       <backend model='random'>/dev/random</backend>
	I0314 01:17:17.823245   70841 main.go:141] libmachine: (newest-cni-970859)     </rng>
	I0314 01:17:17.823254   70841 main.go:141] libmachine: (newest-cni-970859)     
	I0314 01:17:17.823264   70841 main.go:141] libmachine: (newest-cni-970859)     
	I0314 01:17:17.823272   70841 main.go:141] libmachine: (newest-cni-970859)   </devices>
	I0314 01:17:17.823283   70841 main.go:141] libmachine: (newest-cni-970859) </domain>
	I0314 01:17:17.823293   70841 main.go:141] libmachine: (newest-cni-970859) 
	I0314 01:17:17.827787   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:08:32:08 in network default
	I0314 01:17:17.828398   70841 main.go:141] libmachine: (newest-cni-970859) Ensuring networks are active...
	I0314 01:17:17.828419   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:17.829157   70841 main.go:141] libmachine: (newest-cni-970859) Ensuring network default is active
	I0314 01:17:17.829531   70841 main.go:141] libmachine: (newest-cni-970859) Ensuring network mk-newest-cni-970859 is active
	I0314 01:17:17.830067   70841 main.go:141] libmachine: (newest-cni-970859) Getting domain xml...
	I0314 01:17:17.831010   70841 main.go:141] libmachine: (newest-cni-970859) Creating domain...
	I0314 01:17:19.071249   70841 main.go:141] libmachine: (newest-cni-970859) Waiting to get IP...
	I0314 01:17:19.072135   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:19.072545   70841 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:17:19.072578   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:19.072511   70864 retry.go:31] will retry after 284.613438ms: waiting for machine to come up
	I0314 01:17:19.359138   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:19.359606   70841 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:17:19.359635   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:19.359530   70864 retry.go:31] will retry after 360.877061ms: waiting for machine to come up
	I0314 01:17:19.722064   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:19.722592   70841 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:17:19.722625   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:19.722558   70864 retry.go:31] will retry after 423.998791ms: waiting for machine to come up
	I0314 01:17:20.148178   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:20.148662   70841 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:17:20.148684   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:20.148612   70864 retry.go:31] will retry after 503.846002ms: waiting for machine to come up
	I0314 01:17:20.653993   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:20.654440   70841 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:17:20.654486   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:20.654426   70864 retry.go:31] will retry after 535.171964ms: waiting for machine to come up
	I0314 01:17:21.190828   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:21.191312   70841 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:17:21.191356   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:21.191261   70864 retry.go:31] will retry after 942.869952ms: waiting for machine to come up
	I0314 01:17:22.136063   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:22.136561   70841 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:17:22.136587   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:22.136488   70864 retry.go:31] will retry after 975.733965ms: waiting for machine to come up
	I0314 01:17:23.113983   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:23.114489   70841 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:17:23.114514   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:23.114429   70864 retry.go:31] will retry after 1.072624249s: waiting for machine to come up
	I0314 01:17:24.188857   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:24.189391   70841 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:17:24.189427   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:24.189356   70864 retry.go:31] will retry after 1.649852224s: waiting for machine to come up
	I0314 01:17:25.841328   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:25.841758   70841 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:17:25.841785   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:25.841714   70864 retry.go:31] will retry after 2.321692014s: waiting for machine to come up
	I0314 01:17:28.165757   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:28.166201   70841 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:17:28.166225   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:28.166143   70864 retry.go:31] will retry after 2.682576767s: waiting for machine to come up
	I0314 01:17:30.851952   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:30.852368   70841 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:17:30.852398   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:30.852322   70864 retry.go:31] will retry after 3.617138968s: waiting for machine to come up
	I0314 01:17:34.472183   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:34.472725   70841 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:17:34.472751   70841 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:17:34.472681   70864 retry.go:31] will retry after 4.160710906s: waiting for machine to come up
	I0314 01:17:38.634912   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:38.635409   70841 main.go:141] libmachine: (newest-cni-970859) Found IP for machine: 192.168.72.249
	I0314 01:17:38.635427   70841 main.go:141] libmachine: (newest-cni-970859) Reserving static IP address...
	I0314 01:17:38.635437   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has current primary IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:38.635864   70841 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find host DHCP lease matching {name: "newest-cni-970859", mac: "52:54:00:75:c3:8f", ip: "192.168.72.249"} in network mk-newest-cni-970859
	I0314 01:17:38.713708   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Getting to WaitForSSH function...
	I0314 01:17:38.713738   70841 main.go:141] libmachine: (newest-cni-970859) Reserved static IP address: 192.168.72.249
	I0314 01:17:38.713751   70841 main.go:141] libmachine: (newest-cni-970859) Waiting for SSH to be available...
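The retry lines above show the driver polling libvirt for a DHCP lease on the 52:54:00:75:c3:8f MAC with a growing delay until 192.168.72.249 appears. A hedged sketch of the same polling pattern from the shell (the driver does this through the libvirt API; the doubling delay only approximates the logged intervals):

    # Poll the network's DHCP leases until the guest MAC shows up.
    mac="52:54:00:75:c3:8f"; delay=1
    until virsh --connect qemu:///system net-dhcp-leases mk-newest-cni-970859 | grep -qi "$mac"; do
        echo "no lease for $mac yet, retrying in ${delay}s"
        sleep "$delay"
        delay=$((delay * 2))   # back off, roughly like the growing retry intervals in the log
    done
    virsh --connect qemu:///system net-dhcp-leases mk-newest-cni-970859 | grep -i "$mac"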
	I0314 01:17:38.716825   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:38.717160   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:minikube Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:38.717187   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:38.717420   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Using SSH client type: external
	I0314 01:17:38.717442   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa (-rw-------)
	I0314 01:17:38.717472   70841 main.go:141] libmachine: (newest-cni-970859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 01:17:38.717485   70841 main.go:141] libmachine: (newest-cni-970859) DBG | About to run SSH command:
	I0314 01:17:38.717496   70841 main.go:141] libmachine: (newest-cni-970859) DBG | exit 0
	I0314 01:17:38.843024   70841 main.go:141] libmachine: (newest-cni-970859) DBG | SSH cmd err, output: <nil>: 
	I0314 01:17:38.843268   70841 main.go:141] libmachine: (newest-cni-970859) KVM machine creation complete!
	I0314 01:17:38.843684   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetConfigRaw
	I0314 01:17:38.844196   70841 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:17:38.844411   70841 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:17:38.844596   70841 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0314 01:17:38.844612   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetState
	I0314 01:17:38.845694   70841 main.go:141] libmachine: Detecting operating system of created instance...
	I0314 01:17:38.845718   70841 main.go:141] libmachine: Waiting for SSH to be available...
	I0314 01:17:38.845727   70841 main.go:141] libmachine: Getting to WaitForSSH function...
	I0314 01:17:38.845735   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:17:38.848162   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:38.848615   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:38.848643   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:38.848828   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:17:38.849032   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:38.849196   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:38.849377   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:17:38.849538   70841 main.go:141] libmachine: Using SSH client type: native
	I0314 01:17:38.849742   70841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:17:38.849753   70841 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0314 01:17:38.954392   70841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 01:17:38.954420   70841 main.go:141] libmachine: Detecting the provisioner...
	I0314 01:17:38.954431   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:17:38.957309   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:38.957638   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:38.957688   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:38.957896   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:17:38.958105   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:38.958255   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:38.958421   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:17:38.958599   70841 main.go:141] libmachine: Using SSH client type: native
	I0314 01:17:38.958837   70841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:17:38.958851   70841 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0314 01:17:39.063904   70841 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0314 01:17:39.063992   70841 main.go:141] libmachine: found compatible host: buildroot
	I0314 01:17:39.064006   70841 main.go:141] libmachine: Provisioning with buildroot...
	I0314 01:17:39.064016   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetMachineName
	I0314 01:17:39.064273   70841 buildroot.go:166] provisioning hostname "newest-cni-970859"
	I0314 01:17:39.064301   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetMachineName
	I0314 01:17:39.064471   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:17:39.067302   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.067730   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:39.067772   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.067934   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:17:39.068149   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:39.068353   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:39.068504   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:17:39.068685   70841 main.go:141] libmachine: Using SSH client type: native
	I0314 01:17:39.068861   70841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:17:39.068878   70841 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-970859 && echo "newest-cni-970859" | sudo tee /etc/hostname
	I0314 01:17:39.188128   70841 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-970859
	
	I0314 01:17:39.188169   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:17:39.191111   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.191549   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:39.191576   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.191794   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:17:39.191956   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:39.192141   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:39.192255   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:17:39.192458   70841 main.go:141] libmachine: Using SSH client type: native
	I0314 01:17:39.192621   70841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:17:39.192638   70841 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-970859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-970859/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-970859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 01:17:39.306458   70841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 01:17:39.306483   70841 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 01:17:39.306513   70841 buildroot.go:174] setting up certificates
	I0314 01:17:39.306526   70841 provision.go:84] configureAuth start
	I0314 01:17:39.306538   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetMachineName
	I0314 01:17:39.306853   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetIP
	I0314 01:17:39.309701   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.310154   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:39.310184   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.310372   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:17:39.312663   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.313023   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:39.313049   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.313166   70841 provision.go:143] copyHostCerts
	I0314 01:17:39.313234   70841 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 01:17:39.313250   70841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 01:17:39.313344   70841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 01:17:39.313490   70841 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 01:17:39.313512   70841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 01:17:39.313554   70841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 01:17:39.313642   70841 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 01:17:39.313654   70841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 01:17:39.313686   70841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 01:17:39.313769   70841 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.newest-cni-970859 san=[127.0.0.1 192.168.72.249 localhost minikube newest-cni-970859]
	I0314 01:17:39.502252   70841 provision.go:177] copyRemoteCerts
	I0314 01:17:39.502324   70841 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 01:17:39.502353   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:17:39.505126   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.505575   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:39.505613   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.505879   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:17:39.506108   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:39.506307   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:17:39.506452   70841 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:17:39.593777   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 01:17:39.620520   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 01:17:39.647928   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 01:17:39.677003   70841 provision.go:87] duration metric: took 370.465309ms to configureAuth
	I0314 01:17:39.677051   70841 buildroot.go:189] setting minikube options for container-runtime
	I0314 01:17:39.677217   70841 config.go:182] Loaded profile config "newest-cni-970859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 01:17:39.677303   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:17:39.680628   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.681011   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:39.681037   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.681250   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:17:39.681436   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:39.681622   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:39.681755   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:17:39.681945   70841 main.go:141] libmachine: Using SSH client type: native
	I0314 01:17:39.682095   70841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:17:39.682109   70841 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 01:17:39.966459   70841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 01:17:39.966488   70841 main.go:141] libmachine: Checking connection to Docker...
	I0314 01:17:39.966497   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetURL
	I0314 01:17:39.967904   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Using libvirt version 6000000
	I0314 01:17:39.970278   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.970643   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:39.970671   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.970854   70841 main.go:141] libmachine: Docker is up and running!
	I0314 01:17:39.970872   70841 main.go:141] libmachine: Reticulating splines...
	I0314 01:17:39.970879   70841 client.go:171] duration metric: took 22.815410914s to LocalClient.Create
	I0314 01:17:39.970903   70841 start.go:167] duration metric: took 22.815465819s to libmachine.API.Create "newest-cni-970859"
	I0314 01:17:39.970915   70841 start.go:293] postStartSetup for "newest-cni-970859" (driver="kvm2")
	I0314 01:17:39.970930   70841 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 01:17:39.970951   70841 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:17:39.971182   70841 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 01:17:39.971205   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:17:39.973364   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.973796   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:39.973825   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:39.973991   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:17:39.974170   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:39.974350   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:17:39.974512   70841 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:17:40.058263   70841 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 01:17:40.062823   70841 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 01:17:40.062851   70841 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 01:17:40.062919   70841 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 01:17:40.063009   70841 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 01:17:40.063110   70841 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 01:17:40.073852   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 01:17:40.101093   70841 start.go:296] duration metric: took 130.16348ms for postStartSetup
	I0314 01:17:40.101155   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetConfigRaw
	I0314 01:17:40.101757   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetIP
	I0314 01:17:40.104570   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:40.104949   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:40.104979   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:40.105220   70841 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/config.json ...
	I0314 01:17:40.105459   70841 start.go:128] duration metric: took 22.968573365s to createHost
	I0314 01:17:40.105487   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:17:40.108061   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:40.108411   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:40.108445   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:40.108536   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:17:40.108712   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:40.108871   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:40.109019   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:17:40.109186   70841 main.go:141] libmachine: Using SSH client type: native
	I0314 01:17:40.109377   70841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:17:40.109403   70841 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 01:17:40.212122   70841 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710379060.161503029
	
	I0314 01:17:40.212143   70841 fix.go:216] guest clock: 1710379060.161503029
	I0314 01:17:40.212153   70841 fix.go:229] Guest: 2024-03-14 01:17:40.161503029 +0000 UTC Remote: 2024-03-14 01:17:40.105472607 +0000 UTC m=+23.095275931 (delta=56.030422ms)
	I0314 01:17:40.212200   70841 fix.go:200] guest clock delta is within tolerance: 56.030422ms
	I0314 01:17:40.212209   70841 start.go:83] releasing machines lock for "newest-cni-970859", held for 23.075401215s
	I0314 01:17:40.212238   70841 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:17:40.212533   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetIP
	I0314 01:17:40.215668   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:40.215985   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:40.216008   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:40.216189   70841 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:17:40.216797   70841 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:17:40.217004   70841 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:17:40.217106   70841 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 01:17:40.217148   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:17:40.217199   70841 ssh_runner.go:195] Run: cat /version.json
	I0314 01:17:40.217221   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:17:40.219913   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:40.220247   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:40.220287   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:40.220402   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:40.220532   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:17:40.220715   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:40.220824   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:40.220854   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:40.220857   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:17:40.221028   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:17:40.221028   70841 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:17:40.221227   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:17:40.221376   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:17:40.221548   70841 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:17:40.296026   70841 ssh_runner.go:195] Run: systemctl --version
	I0314 01:17:40.337974   70841 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 01:17:40.500492   70841 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 01:17:40.507070   70841 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 01:17:40.507152   70841 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 01:17:40.525857   70841 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 01:17:40.525884   70841 start.go:494] detecting cgroup driver to use...
	I0314 01:17:40.525960   70841 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 01:17:40.542432   70841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 01:17:40.556330   70841 docker.go:217] disabling cri-docker service (if available) ...
	I0314 01:17:40.556376   70841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 01:17:40.571925   70841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 01:17:40.589189   70841 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 01:17:40.719002   70841 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 01:17:40.901038   70841 docker.go:233] disabling docker service ...
	I0314 01:17:40.901120   70841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 01:17:40.917274   70841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 01:17:40.932634   70841 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 01:17:41.054016   70841 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 01:17:41.187452   70841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 01:17:41.202929   70841 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 01:17:41.223367   70841 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 01:17:41.223438   70841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 01:17:41.235067   70841 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 01:17:41.235125   70841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 01:17:41.247545   70841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 01:17:41.260651   70841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
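The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch CRI-O to the cgroupfs driver. A quick check of the resulting drop-in, with the expected values taken from those commands (exact file layout may differ):

    # Verify the CRI-O drop-in picked up the settings applied above.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # Expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"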
	I0314 01:17:41.273268   70841 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 01:17:41.286246   70841 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 01:17:41.298074   70841 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 01:17:41.298130   70841 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 01:17:41.312877   70841 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 01:17:41.323918   70841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 01:17:41.450530   70841 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 01:17:41.608708   70841 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 01:17:41.608777   70841 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 01:17:41.613983   70841 start.go:562] Will wait 60s for crictl version
	I0314 01:17:41.614040   70841 ssh_runner.go:195] Run: which crictl
	I0314 01:17:41.618059   70841 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 01:17:41.660775   70841 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 01:17:41.660848   70841 ssh_runner.go:195] Run: crio --version
	I0314 01:17:41.691947   70841 ssh_runner.go:195] Run: crio --version
	I0314 01:17:41.728034   70841 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 01:17:41.729508   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetIP
	I0314 01:17:41.732542   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:41.732884   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:17:41.732913   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:17:41.733188   70841 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 01:17:41.737933   70841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 01:17:41.754496   70841 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0314 01:17:41.755874   70841 kubeadm.go:877] updating cluster {Name:newest-cni-970859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:newest-cni-970859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.249 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 01:17:41.756023   70841 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 01:17:41.756101   70841 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 01:17:41.791359   70841 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 01:17:41.791420   70841 ssh_runner.go:195] Run: which lz4
	I0314 01:17:41.795693   70841 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 01:17:41.800205   70841 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 01:17:41.800236   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0314 01:17:43.401137   70841 crio.go:444] duration metric: took 1.605479677s to copy over tarball
	I0314 01:17:43.401243   70841 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 01:17:45.804885   70841 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.403607137s)
	I0314 01:17:45.804914   70841 crio.go:451] duration metric: took 2.403751279s to extract the tarball
	I0314 01:17:45.804937   70841 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 01:17:45.845972   70841 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 01:17:45.893008   70841 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 01:17:45.893029   70841 cache_images.go:84] Images are preloaded, skipping loading
	I0314 01:17:45.893038   70841 kubeadm.go:928] updating node { 192.168.72.249 8443 v1.29.0-rc.2 crio true true} ...
	I0314 01:17:45.893152   70841 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-970859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-970859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 01:17:45.893231   70841 ssh_runner.go:195] Run: crio config
	I0314 01:17:45.942720   70841 cni.go:84] Creating CNI manager for ""
	I0314 01:17:45.942744   70841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 01:17:45.942756   70841 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0314 01:17:45.942807   70841 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.249 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-970859 NodeName:newest-cni-970859 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.72.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 01:17:45.942996   70841 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-970859"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 01:17:45.943072   70841 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 01:17:45.955136   70841 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 01:17:45.955196   70841 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 01:17:45.965035   70841 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0314 01:17:45.983511   70841 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 01:17:46.001543   70841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
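At this point the generated kubeadm config shown earlier has been copied to /var/tmp/minikube/kubeadm.yaml.new on the guest, alongside the kubelet unit drop-in. For orientation only, a manual run against that config would use the generic kubeadm CLI roughly as below; this is not necessarily minikube's exact phased invocation, and the preflight-ignore list is an assumption.

    # Illustrative only: minikube drives kubeadm itself during bootstrap.
    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new \
        --ignore-preflight-errors=all            # assumption: minikube passes its own ignore list
    # Afterwards the usual sanity check applies (kubectl path is an assumption; any kubectl works):
    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes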
	I0314 01:17:46.021327   70841 ssh_runner.go:195] Run: grep 192.168.72.249	control-plane.minikube.internal$ /etc/hosts
	I0314 01:17:46.025950   70841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.249	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 01:17:46.039653   70841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 01:17:46.196226   70841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 01:17:46.218085   70841 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859 for IP: 192.168.72.249
	I0314 01:17:46.218112   70841 certs.go:194] generating shared ca certs ...
	I0314 01:17:46.218132   70841 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:17:46.218294   70841 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 01:17:46.218352   70841 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 01:17:46.218364   70841 certs.go:256] generating profile certs ...
	I0314 01:17:46.218447   70841 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/client.key
	I0314 01:17:46.218466   70841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/client.crt with IP's: []
	I0314 01:17:46.629800   70841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/client.crt ...
	I0314 01:17:46.629834   70841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/client.crt: {Name:mk52580084ba12bb290357dcbd864c221add418e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:17:46.630057   70841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/client.key ...
	I0314 01:17:46.630075   70841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/client.key: {Name:mkbba2bbf5df27bde4c90964ee92eb449eb12328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:17:46.630193   70841 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.key.b2d72356
	I0314 01:17:46.630215   70841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.crt.b2d72356 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.249]
	I0314 01:17:47.039897   70841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.crt.b2d72356 ...
	I0314 01:17:47.039928   70841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.crt.b2d72356: {Name:mk3bfb194142c40956738838d47289d78902ee27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:17:47.040106   70841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.key.b2d72356 ...
	I0314 01:17:47.040125   70841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.key.b2d72356: {Name:mk876d9e96e48d04e9ed8d2197c6bee59c28e77b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:17:47.040225   70841 certs.go:381] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.crt.b2d72356 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.crt
	I0314 01:17:47.040320   70841 certs.go:385] copying /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.key.b2d72356 -> /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.key
	I0314 01:17:47.040417   70841 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/proxy-client.key
	I0314 01:17:47.040439   70841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/proxy-client.crt with IP's: []
	I0314 01:17:47.111051   70841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/proxy-client.crt ...
	I0314 01:17:47.111080   70841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/proxy-client.crt: {Name:mka5585d55fecd6302f186ffe21fc7a38309b14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:17:47.111269   70841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/proxy-client.key ...
	I0314 01:17:47.111284   70841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/proxy-client.key: {Name:mk85ec03f9809a7ce7d5e31fa85adbf8e686591e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:17:47.111490   70841 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 01:17:47.111535   70841 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 01:17:47.111543   70841 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 01:17:47.111578   70841 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 01:17:47.111609   70841 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 01:17:47.111639   70841 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 01:17:47.111707   70841 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 01:17:47.112388   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 01:17:47.154344   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 01:17:47.196023   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 01:17:47.233600   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 01:17:47.263579   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 01:17:47.291487   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 01:17:47.322162   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 01:17:47.350609   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 01:17:47.378248   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 01:17:47.406139   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 01:17:47.433917   70841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 01:17:47.464378   70841 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 01:17:47.485579   70841 ssh_runner.go:195] Run: openssl version
	I0314 01:17:47.492022   70841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 01:17:47.504978   70841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 01:17:47.510324   70841 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 01:17:47.510387   70841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 01:17:47.517028   70841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 01:17:47.529432   70841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 01:17:47.542721   70841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 01:17:47.547740   70841 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 01:17:47.547797   70841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 01:17:47.553991   70841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 01:17:47.566694   70841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 01:17:47.580187   70841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 01:17:47.585407   70841 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 01:17:47.585467   70841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 01:17:47.591967   70841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
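The sequence above installs each CA into /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash, which is how later trust-store lookups in the run resolve. A compressed sketch of that idiom, with an illustrative certificate path:

	# Hedged sketch of the hash-and-link step shown above; the path is illustrative.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # e.g. b5213941, as in the log
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # makes the CA discoverable to OpenSSL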
	I0314 01:17:47.606162   70841 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 01:17:47.610476   70841 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 01:17:47.610531   70841 kubeadm.go:391] StartCluster: {Name:newest-cni-970859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:newest-cni-970859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.249 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 01:17:47.610609   70841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 01:17:47.610654   70841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 01:17:47.656400   70841 cri.go:89] found id: ""
	I0314 01:17:47.656478   70841 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 01:17:47.669668   70841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 01:17:47.681196   70841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:17:47.692509   70841 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:17:47.692530   70841 kubeadm.go:156] found existing configuration files:
	
	I0314 01:17:47.692635   70841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:17:47.702902   70841 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:17:47.702958   70841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:17:47.716238   70841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:17:47.727713   70841 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:17:47.727765   70841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:17:47.739048   70841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:17:47.749784   70841 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:17:47.749845   70841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:17:47.761026   70841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:17:47.772590   70841 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:17:47.772655   70841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 01:17:47.783931   70841 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:17:47.922387   70841 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0314 01:17:47.922477   70841 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:17:48.092046   70841 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:17:48.092255   70841 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:17:48.092384   70841 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0314 01:17:48.324369   70841 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:17:48.478500   70841 out.go:204]   - Generating certificates and keys ...
	I0314 01:17:48.478654   70841 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:17:48.478757   70841 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:17:48.478913   70841 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 01:17:48.784701   70841 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 01:17:49.061051   70841 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 01:17:49.197564   70841 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 01:17:49.384583   70841 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 01:17:49.384865   70841 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-970859] and IPs [192.168.72.249 127.0.0.1 ::1]
	I0314 01:17:49.523092   70841 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 01:17:49.523568   70841 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-970859] and IPs [192.168.72.249 127.0.0.1 ::1]
	I0314 01:17:49.659214   70841 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 01:17:49.730503   70841 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 01:17:50.046706   70841 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 01:17:50.046953   70841 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:17:50.600223   70841 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:17:50.712091   70841 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0314 01:17:50.802454   70841 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:17:50.935737   70841 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:17:51.048736   70841 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:17:51.049289   70841 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:17:51.052102   70841 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 01:17:51.055437   70841 out.go:204]   - Booting up control plane ...
	I0314 01:17:51.055592   70841 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:17:51.055700   70841 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:17:51.055789   70841 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:17:51.070486   70841 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:17:51.071744   70841 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:17:51.071815   70841 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:17:51.213693   70841 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:17:56.721226   70841 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.505693 seconds
	I0314 01:17:56.742546   70841 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 01:17:56.770641   70841 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 01:17:57.297161   70841 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 01:17:57.297429   70841 kubeadm.go:309] [mark-control-plane] Marking the node newest-cni-970859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 01:17:57.814716   70841 kubeadm.go:309] [bootstrap-token] Using token: 1whjxu.6yfvhr0jn8mzei52
	I0314 01:17:57.816266   70841 out.go:204]   - Configuring RBAC rules ...
	I0314 01:17:57.816405   70841 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 01:17:57.822868   70841 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 01:17:57.831527   70841 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 01:17:57.837274   70841 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 01:17:57.844411   70841 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 01:17:57.848488   70841 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 01:17:57.862889   70841 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 01:17:58.172664   70841 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 01:17:58.238353   70841 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 01:17:58.239243   70841 kubeadm.go:309] 
	I0314 01:17:58.239328   70841 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 01:17:58.239342   70841 kubeadm.go:309] 
	I0314 01:17:58.239430   70841 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 01:17:58.239440   70841 kubeadm.go:309] 
	I0314 01:17:58.239473   70841 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 01:17:58.239552   70841 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 01:17:58.239626   70841 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 01:17:58.239633   70841 kubeadm.go:309] 
	I0314 01:17:58.239720   70841 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 01:17:58.239727   70841 kubeadm.go:309] 
	I0314 01:17:58.239793   70841 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 01:17:58.239799   70841 kubeadm.go:309] 
	I0314 01:17:58.239871   70841 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 01:17:58.239963   70841 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 01:17:58.240055   70841 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 01:17:58.240068   70841 kubeadm.go:309] 
	I0314 01:17:58.240158   70841 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 01:17:58.240262   70841 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 01:17:58.240274   70841 kubeadm.go:309] 
	I0314 01:17:58.240416   70841 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 1whjxu.6yfvhr0jn8mzei52 \
	I0314 01:17:58.240575   70841 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c \
	I0314 01:17:58.240613   70841 kubeadm.go:309] 	--control-plane 
	I0314 01:17:58.240624   70841 kubeadm.go:309] 
	I0314 01:17:58.240767   70841 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 01:17:58.240778   70841 kubeadm.go:309] 
	I0314 01:17:58.240887   70841 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 1whjxu.6yfvhr0jn8mzei52 \
	I0314 01:17:58.241019   70841 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b198eee1163731c3270944d322d4b531b28fdfe7e5274765d5fd11524f0e288c 
	I0314 01:17:58.242471   70841 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:17:58.242504   70841 cni.go:84] Creating CNI manager for ""
	I0314 01:17:58.242514   70841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 01:17:58.245425   70841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 01:17:58.246868   70841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 01:17:58.280899   70841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 01:17:58.328720   70841 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 01:17:58.328798   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:17:58.328798   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-970859 minikube.k8s.io/updated_at=2024_03_14T01_17_58_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe minikube.k8s.io/name=newest-cni-970859 minikube.k8s.io/primary=true
	I0314 01:17:58.445242   70841 ops.go:34] apiserver oom_adj: -16
	I0314 01:17:58.653451   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:17:59.153863   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:17:59.653800   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:00.153705   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:00.653810   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:01.154257   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:01.653587   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:02.154108   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:02.654417   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:03.154480   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:03.654427   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:04.153656   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:04.653704   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:05.154461   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:05.653535   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:06.154121   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:06.653725   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:07.153839   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:07.654011   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:08.153955   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:08.654211   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:09.153740   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:09.653841   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:10.153406   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:10.653471   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:11.153670   70841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 01:18:11.274304   70841 kubeadm.go:1106] duration metric: took 12.945575221s to wait for elevateKubeSystemPrivileges
	W0314 01:18:11.274353   70841 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 01:18:11.274362   70841 kubeadm.go:393] duration metric: took 23.663835219s to StartCluster
	I0314 01:18:11.274382   70841 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:18:11.274460   70841 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 01:18:11.276164   70841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:18:11.276408   70841 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 01:18:11.276421   70841 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.249 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 01:18:11.279535   70841 out.go:177] * Verifying Kubernetes components...
	I0314 01:18:11.276533   70841 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 01:18:11.276619   70841 config.go:182] Loaded profile config "newest-cni-970859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 01:18:11.279572   70841 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-970859"
	I0314 01:18:11.279597   70841 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-970859"
	I0314 01:18:11.279626   70841 host.go:66] Checking if "newest-cni-970859" exists ...
	I0314 01:18:11.279637   70841 addons.go:69] Setting default-storageclass=true in profile "newest-cni-970859"
	I0314 01:18:11.279666   70841 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-970859"
	I0314 01:18:11.281416   70841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 01:18:11.279987   70841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:11.281533   70841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:11.279989   70841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:11.281589   70841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:11.298092   70841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36247
	I0314 01:18:11.298505   70841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0314 01:18:11.298665   70841 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:11.299230   70841 main.go:141] libmachine: Using API Version  1
	I0314 01:18:11.299259   70841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:11.299299   70841 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:11.299665   70841 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:11.299817   70841 main.go:141] libmachine: Using API Version  1
	I0314 01:18:11.299837   70841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:11.299851   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetState
	I0314 01:18:11.300176   70841 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:11.300670   70841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:11.300696   70841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:11.304066   70841 addons.go:234] Setting addon default-storageclass=true in "newest-cni-970859"
	I0314 01:18:11.304105   70841 host.go:66] Checking if "newest-cni-970859" exists ...
	I0314 01:18:11.304475   70841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:11.304521   70841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:11.316855   70841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44529
	I0314 01:18:11.317421   70841 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:11.318138   70841 main.go:141] libmachine: Using API Version  1
	I0314 01:18:11.318171   70841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:11.318593   70841 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:11.318846   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetState
	I0314 01:18:11.320503   70841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43839
	I0314 01:18:11.320938   70841 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:11.320963   70841 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:11.323411   70841 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 01:18:11.321516   70841 main.go:141] libmachine: Using API Version  1
	I0314 01:18:11.325285   70841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:11.325407   70841 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 01:18:11.325429   70841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 01:18:11.325449   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:11.325783   70841 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:11.327020   70841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:11.327072   70841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:11.330460   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:11.330980   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:11.331009   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:11.331321   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:11.331524   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:11.331707   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:11.331844   70841 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:18:11.344802   70841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36585
	I0314 01:18:11.345224   70841 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:11.345705   70841 main.go:141] libmachine: Using API Version  1
	I0314 01:18:11.345718   70841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:11.346022   70841 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:11.346184   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetState
	I0314 01:18:11.347894   70841 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:11.348230   70841 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 01:18:11.348241   70841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 01:18:11.348255   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:11.351777   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:11.352231   70841 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:17:32 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:11.352245   70841 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:11.352458   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:11.352687   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:11.352838   70841 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:11.352996   70841 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:18:11.592251   70841 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0314 01:18:11.592279   70841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 01:18:11.742739   70841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 01:18:11.832206   70841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 01:18:12.742971   70841 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.150675947s)
	I0314 01:18:12.743008   70841 start.go:948] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0314 01:18:12.743021   70841 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.150715306s)
	I0314 01:18:12.744202   70841 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:18:12.744257   70841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:18:12.976458   70841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.144217658s)
	I0314 01:18:12.976509   70841 main.go:141] libmachine: Making call to close driver server
	I0314 01:18:12.976510   70841 api_server.go:72] duration metric: took 1.70005434s to wait for apiserver process to appear ...
	I0314 01:18:12.976522   70841 main.go:141] libmachine: (newest-cni-970859) Calling .Close
	I0314 01:18:12.976528   70841 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:18:12.976533   70841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.233759764s)
	I0314 01:18:12.976550   70841 api_server.go:253] Checking apiserver healthz at https://192.168.72.249:8443/healthz ...
	I0314 01:18:12.976559   70841 main.go:141] libmachine: Making call to close driver server
	I0314 01:18:12.976577   70841 main.go:141] libmachine: (newest-cni-970859) Calling .Close
	I0314 01:18:12.976857   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Closing plugin on server side
	I0314 01:18:12.976916   70841 main.go:141] libmachine: Successfully made call to close driver server
	I0314 01:18:12.976926   70841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 01:18:12.976934   70841 main.go:141] libmachine: Making call to close driver server
	I0314 01:18:12.976935   70841 main.go:141] libmachine: Successfully made call to close driver server
	I0314 01:18:12.976946   70841 main.go:141] libmachine: (newest-cni-970859) Calling .Close
	I0314 01:18:12.976949   70841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 01:18:12.977004   70841 main.go:141] libmachine: Making call to close driver server
	I0314 01:18:12.977014   70841 main.go:141] libmachine: (newest-cni-970859) Calling .Close
	I0314 01:18:12.977349   70841 main.go:141] libmachine: (newest-cni-970859) DBG | Closing plugin on server side
	I0314 01:18:12.977384   70841 main.go:141] libmachine: Successfully made call to close driver server
	I0314 01:18:12.977412   70841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 01:18:12.977445   70841 main.go:141] libmachine: Successfully made call to close driver server
	I0314 01:18:12.977469   70841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 01:18:12.993091   70841 api_server.go:279] https://192.168.72.249:8443/healthz returned 200:
	ok
	I0314 01:18:12.996481   70841 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 01:18:12.996515   70841 api_server.go:131] duration metric: took 19.979521ms to wait for apiserver health ...
	I0314 01:18:12.996526   70841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:18:13.006744   70841 main.go:141] libmachine: Making call to close driver server
	I0314 01:18:13.006793   70841 main.go:141] libmachine: (newest-cni-970859) Calling .Close
	I0314 01:18:13.007118   70841 main.go:141] libmachine: Successfully made call to close driver server
	I0314 01:18:13.007140   70841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 01:18:13.010866   70841 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0314 01:18:13.011947   70841 addons.go:505] duration metric: took 1.735414359s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0314 01:18:13.011798   70841 system_pods.go:59] 8 kube-system pods found
	I0314 01:18:13.011988   70841 system_pods.go:61] "coredns-76f75df574-79jw7" [cbe445df-f726-472b-b0ff-02b1a3ba4747] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 01:18:13.012003   70841 system_pods.go:61] "coredns-76f75df574-kdg7j" [85edaa1f-af91-478d-902b-9e128799b04d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 01:18:13.012011   70841 system_pods.go:61] "etcd-newest-cni-970859" [0c78288d-afc9-4ea5-90a6-2d6ac021a743] Running
	I0314 01:18:13.012021   70841 system_pods.go:61] "kube-apiserver-newest-cni-970859" [8fac6b39-ce44-466c-a2e6-a7eac41c65be] Running
	I0314 01:18:13.012027   70841 system_pods.go:61] "kube-controller-manager-newest-cni-970859" [231f4ead-1cc9-4e25-b8e9-6444bc103e24] Running
	I0314 01:18:13.012032   70841 system_pods.go:61] "kube-proxy-hpk8q" [b139648f-e89d-4f2e-be3b-445dec5997dd] Running
	I0314 01:18:13.012037   70841 system_pods.go:61] "kube-scheduler-newest-cni-970859" [94d0802f-2581-4ac5-93ca-63dd410a95c4] Running
	I0314 01:18:13.012041   70841 system_pods.go:61] "storage-provisioner" [25ba16d4-5e06-4666-b312-af1619365c85] Pending
	I0314 01:18:13.012048   70841 system_pods.go:74] duration metric: took 15.515191ms to wait for pod list to return data ...
	I0314 01:18:13.012057   70841 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:18:13.022148   70841 default_sa.go:45] found service account: "default"
	I0314 01:18:13.022248   70841 default_sa.go:55] duration metric: took 10.182478ms for default service account to be created ...
	I0314 01:18:13.022279   70841 kubeadm.go:576] duration metric: took 1.745821818s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0314 01:18:13.022313   70841 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:18:13.025979   70841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:18:13.026019   70841 node_conditions.go:123] node cpu capacity is 2
	I0314 01:18:13.026035   70841 node_conditions.go:105] duration metric: took 3.713882ms to run NodePressure ...
	I0314 01:18:13.026049   70841 start.go:240] waiting for startup goroutines ...
	I0314 01:18:13.248433   70841 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-970859" context rescaled to 1 replicas
	I0314 01:18:13.248470   70841 start.go:245] waiting for cluster config update ...
	I0314 01:18:13.248487   70841 start.go:254] writing updated cluster config ...
	I0314 01:18:13.248769   70841 ssh_runner.go:195] Run: rm -f paused
	I0314 01:18:13.300489   70841 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 01:18:13.303167   70841 out.go:177] * Done! kubectl is now configured to use "newest-cni-970859" cluster and "default" namespace by default
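Once a start like the one above reports Done!, the simplest health check is a read-only query against the new context; a hedged sketch using the context name printed in the final log line:

	# Assumes kubectl is configured as stated above; both commands are read-only.
	kubectl --context newest-cni-970859 get nodes -o wide
	kubectl --context newest-cni-970859 -n kube-system get pods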
	
	
	==> CRI-O <==
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.448792113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379101448770037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=935ed587-e5eb-4420-88da-14ecbb3e87d9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.449758806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1bc1831e-3c5e-481d-b476-1b6f56d90a44 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.449812160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1bc1831e-3c5e-481d-b476-1b6f56d90a44 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.450106724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377920512797308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c016d74dfbbf394363811f4d82151fc6495aa1b501a0b5248d9a8a13ce355d5,PodSandboxId:77a92686f45fb38b5ab70ce0519b56c6f166843186b322e8eb0c86b61928c055,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377897681117082,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dfd2648-2774-42e2-8674-f4f1b8cc2856,},Annotations:map[string]string{io.kubernetes.container.hash: a9ac11a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2,PodSandboxId:1135d8ed633c0cfc57a4e79393d9dfebb02071f8f70bb7a0df5780b0c5c1dfc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710377895293177524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lptfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 597ce2ed-6ab6-418e-9720-9ae9d275cb33,},Annotations:map[string]string{io.kubernetes.container.hash: 8248fac3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377888700609032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
13f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828,PodSandboxId:c6c6fd086a01a2761e9099b3c6d6ccd2ad9dbfc68a4f02f13569529ae2bcf4b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710377888683363130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wpdb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 013df8e8-ce80-4cff-937a-16742369c5
61,},Annotations:map[string]string{io.kubernetes.container.hash: da357755,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2,PodSandboxId:1b6c4eb38b6dd3afa645977da44ebd6cfdd7e2d40cf96a976fe4ef3958f99ab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710377883040966665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108778185192fe31
95fda362ff928a03,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b,PodSandboxId:47abf83a24220ca0cd550626b9c27bea49e535fafacb3bcf7f73dda8fc92c5de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710377883092021845,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f96a85171c835c0ee3580825ac290b83,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 9f5528d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf,PodSandboxId:de219d83395e9985021633273931b05ecb6669a4195154bf30455fbf5317ffc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710377883011351668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e656d08e1c0674b0323bc28bbc43a651,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239,PodSandboxId:e4e910e75784faad378a2f043b7693c41425fd01f539054c2bf5acee1d9e14cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710377882999235667,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e8e3458d0fcc73b22639020e4dbe845,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: a84dd2b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1bc1831e-3c5e-481d-b476-1b6f56d90a44 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.496038848Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e196e951-c037-4225-8613-6ecfc1fe1718 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.496149100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e196e951-c037-4225-8613-6ecfc1fe1718 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.497627739Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba8d9d18-51f0-46e6-9cf5-69b3cedaa118 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.498088491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379101498063179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba8d9d18-51f0-46e6-9cf5-69b3cedaa118 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.502984138Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40d99a82-b3f8-45a4-ab9e-9f9f6a547fb0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.503228783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40d99a82-b3f8-45a4-ab9e-9f9f6a547fb0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.503516522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377920512797308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c016d74dfbbf394363811f4d82151fc6495aa1b501a0b5248d9a8a13ce355d5,PodSandboxId:77a92686f45fb38b5ab70ce0519b56c6f166843186b322e8eb0c86b61928c055,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377897681117082,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dfd2648-2774-42e2-8674-f4f1b8cc2856,},Annotations:map[string]string{io.kubernetes.container.hash: a9ac11a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2,PodSandboxId:1135d8ed633c0cfc57a4e79393d9dfebb02071f8f70bb7a0df5780b0c5c1dfc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710377895293177524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lptfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 597ce2ed-6ab6-418e-9720-9ae9d275cb33,},Annotations:map[string]string{io.kubernetes.container.hash: 8248fac3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377888700609032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
13f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828,PodSandboxId:c6c6fd086a01a2761e9099b3c6d6ccd2ad9dbfc68a4f02f13569529ae2bcf4b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710377888683363130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wpdb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 013df8e8-ce80-4cff-937a-16742369c5
61,},Annotations:map[string]string{io.kubernetes.container.hash: da357755,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2,PodSandboxId:1b6c4eb38b6dd3afa645977da44ebd6cfdd7e2d40cf96a976fe4ef3958f99ab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710377883040966665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108778185192fe31
95fda362ff928a03,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b,PodSandboxId:47abf83a24220ca0cd550626b9c27bea49e535fafacb3bcf7f73dda8fc92c5de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710377883092021845,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f96a85171c835c0ee3580825ac290b83,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 9f5528d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf,PodSandboxId:de219d83395e9985021633273931b05ecb6669a4195154bf30455fbf5317ffc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710377883011351668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e656d08e1c0674b0323bc28bbc43a651,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239,PodSandboxId:e4e910e75784faad378a2f043b7693c41425fd01f539054c2bf5acee1d9e14cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710377882999235667,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e8e3458d0fcc73b22639020e4dbe845,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: a84dd2b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40d99a82-b3f8-45a4-ab9e-9f9f6a547fb0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.552482930Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=510f7327-f03c-4c33-b544-36325ec89470 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.552582964Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=510f7327-f03c-4c33-b544-36325ec89470 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.554257698Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d99b5e6-ef98-4c52-9bca-a35cae9ce77c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.554577127Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379101554556340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d99b5e6-ef98-4c52-9bca-a35cae9ce77c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.555270613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f64cc30d-64af-45d4-b4fa-46affdc0a835 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.555350223Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f64cc30d-64af-45d4-b4fa-46affdc0a835 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.555551902Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377920512797308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c016d74dfbbf394363811f4d82151fc6495aa1b501a0b5248d9a8a13ce355d5,PodSandboxId:77a92686f45fb38b5ab70ce0519b56c6f166843186b322e8eb0c86b61928c055,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377897681117082,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dfd2648-2774-42e2-8674-f4f1b8cc2856,},Annotations:map[string]string{io.kubernetes.container.hash: a9ac11a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2,PodSandboxId:1135d8ed633c0cfc57a4e79393d9dfebb02071f8f70bb7a0df5780b0c5c1dfc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710377895293177524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lptfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 597ce2ed-6ab6-418e-9720-9ae9d275cb33,},Annotations:map[string]string{io.kubernetes.container.hash: 8248fac3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377888700609032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
13f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828,PodSandboxId:c6c6fd086a01a2761e9099b3c6d6ccd2ad9dbfc68a4f02f13569529ae2bcf4b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710377888683363130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wpdb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 013df8e8-ce80-4cff-937a-16742369c5
61,},Annotations:map[string]string{io.kubernetes.container.hash: da357755,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2,PodSandboxId:1b6c4eb38b6dd3afa645977da44ebd6cfdd7e2d40cf96a976fe4ef3958f99ab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710377883040966665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108778185192fe31
95fda362ff928a03,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b,PodSandboxId:47abf83a24220ca0cd550626b9c27bea49e535fafacb3bcf7f73dda8fc92c5de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710377883092021845,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f96a85171c835c0ee3580825ac290b83,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 9f5528d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf,PodSandboxId:de219d83395e9985021633273931b05ecb6669a4195154bf30455fbf5317ffc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710377883011351668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e656d08e1c0674b0323bc28bbc43a651,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239,PodSandboxId:e4e910e75784faad378a2f043b7693c41425fd01f539054c2bf5acee1d9e14cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710377882999235667,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e8e3458d0fcc73b22639020e4dbe845,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: a84dd2b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f64cc30d-64af-45d4-b4fa-46affdc0a835 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.593235147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b4c53bb-e2cd-493a-87f1-d8fff36d429b name=/runtime.v1.RuntimeService/Version
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.593334376Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b4c53bb-e2cd-493a-87f1-d8fff36d429b name=/runtime.v1.RuntimeService/Version
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.594679674Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c834853-a35f-4cb4-bdd2-82f57da0a890 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.595148098Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379101595121065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c834853-a35f-4cb4-bdd2-82f57da0a890 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.595686360Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84b2cb1a-51be-47a5-a1b5-faf2823a32c5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.595775413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84b2cb1a-51be-47a5-a1b5-faf2823a32c5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:21 no-preload-585806 crio[696]: time="2024-03-14 01:18:21.596151838Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377920512797308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c016d74dfbbf394363811f4d82151fc6495aa1b501a0b5248d9a8a13ce355d5,PodSandboxId:77a92686f45fb38b5ab70ce0519b56c6f166843186b322e8eb0c86b61928c055,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377897681117082,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dfd2648-2774-42e2-8674-f4f1b8cc2856,},Annotations:map[string]string{io.kubernetes.container.hash: a9ac11a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2,PodSandboxId:1135d8ed633c0cfc57a4e79393d9dfebb02071f8f70bb7a0df5780b0c5c1dfc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710377895293177524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lptfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 597ce2ed-6ab6-418e-9720-9ae9d275cb33,},Annotations:map[string]string{io.kubernetes.container.hash: 8248fac3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0,PodSandboxId:3ec37274976803c6265881971f74bc9b7314b267cdcbd23d05927b7555762e73,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377888700609032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
13f608a-28d1-4365-9898-dd6f37150317,},Annotations:map[string]string{io.kubernetes.container.hash: f80b072c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828,PodSandboxId:c6c6fd086a01a2761e9099b3c6d6ccd2ad9dbfc68a4f02f13569529ae2bcf4b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710377888683363130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wpdb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 013df8e8-ce80-4cff-937a-16742369c5
61,},Annotations:map[string]string{io.kubernetes.container.hash: da357755,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2,PodSandboxId:1b6c4eb38b6dd3afa645977da44ebd6cfdd7e2d40cf96a976fe4ef3958f99ab8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710377883040966665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108778185192fe31
95fda362ff928a03,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b,PodSandboxId:47abf83a24220ca0cd550626b9c27bea49e535fafacb3bcf7f73dda8fc92c5de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710377883092021845,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f96a85171c835c0ee3580825ac290b83,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 9f5528d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf,PodSandboxId:de219d83395e9985021633273931b05ecb6669a4195154bf30455fbf5317ffc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710377883011351668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e656d08e1c0674b0323bc28bbc43a651,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239,PodSandboxId:e4e910e75784faad378a2f043b7693c41425fd01f539054c2bf5acee1d9e14cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710377882999235667,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-585806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e8e3458d0fcc73b22639020e4dbe845,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: a84dd2b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84b2cb1a-51be-47a5-a1b5-faf2823a32c5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ba8fd6893aa1d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   3ec3727497680       storage-provisioner
	3c016d74dfbbf       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   77a92686f45fb       busybox
	7a23310363170       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   1135d8ed633c0       coredns-76f75df574-lptfk
	3d431baedcd8c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   3ec3727497680       storage-provisioner
	3c9a4136bfd32       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      20 minutes ago      Running             kube-proxy                1                   c6c6fd086a01a       kube-proxy-wpdb9
	d05f2a8d7b1aa       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      20 minutes ago      Running             etcd                      1                   47abf83a24220       etcd-no-preload-585806
	396e0c2ab791a       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      20 minutes ago      Running             kube-controller-manager   1                   1b6c4eb38b6dd       kube-controller-manager-no-preload-585806
	eaf7cd9d2f3f8       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      20 minutes ago      Running             kube-scheduler            1                   de219d83395e9       kube-scheduler-no-preload-585806
	310169fe474c4       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      20 minutes ago      Running             kube-apiserver            1                   e4e910e75784f       kube-apiserver-no-preload-585806
	
	
	==> coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37552 - 35520 "HINFO IN 2493074614276229977.5371900738671167779. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008631751s
	
	
	==> describe nodes <==
	Name:               no-preload-585806
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-585806
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=no-preload-585806
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T00_50_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:50:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-585806
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 01:18:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 01:13:56 +0000   Thu, 14 Mar 2024 00:50:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 01:13:56 +0000   Thu, 14 Mar 2024 00:50:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 01:13:56 +0000   Thu, 14 Mar 2024 00:50:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 01:13:56 +0000   Thu, 14 Mar 2024 00:58:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    no-preload-585806
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 90a091811dad4078bf279872b150db37
	  System UUID:                90a09181-1dad-4078-bf27-9872b150db37
	  Boot ID:                    7b4921fb-3e23-45df-a6de-d03fc0ff22c5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-76f75df574-lptfk                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-no-preload-585806                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kube-apiserver-no-preload-585806             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-no-preload-585806    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-wpdb9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-no-preload-585806             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 metrics-server-57f55c9bc5-7pzll              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node no-preload-585806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node no-preload-585806 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node no-preload-585806 status is now: NodeHasSufficientPID
	  Normal  NodeReady                27m                kubelet          Node no-preload-585806 status is now: NodeReady
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                node-controller  Node no-preload-585806 event: Registered Node no-preload-585806 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node no-preload-585806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node no-preload-585806 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node no-preload-585806 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node no-preload-585806 event: Registered Node no-preload-585806 in Controller
	
	
	==> dmesg <==
	[Mar14 00:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052411] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041611] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.522446] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.859067] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.654073] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.669454] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.063148] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067479] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.199936] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.154049] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.290727] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[ +16.789590] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.065797] kauditd_printk_skb: 130 callbacks suppressed
	[Mar14 00:58] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +5.651706] kauditd_printk_skb: 100 callbacks suppressed
	[  +4.513730] systemd-fstab-generator[1929]: Ignoring "noauto" option for root device
	[  +1.272423] kauditd_printk_skb: 56 callbacks suppressed
	[  +7.900161] kauditd_printk_skb: 30 callbacks suppressed
	
	
	==> etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] <==
	{"level":"info","ts":"2024-03-14T00:58:09.10801Z","caller":"traceutil/trace.go:171","msg":"trace[1734059500] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:55; response_revision:486; }","duration":"565.029177ms","start":"2024-03-14T00:58:08.542971Z","end":"2024-03-14T00:58:09.108Z","steps":["trace[1734059500] 'agreement among raft nodes before linearized reading'  (duration: 563.411427ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:09.108115Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T00:58:08.542946Z","time spent":"565.155773ms","remote":"127.0.0.1:60648","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":55,"response size":39904,"request content":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" "}
	{"level":"warn","ts":"2024-03-14T00:58:09.779335Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.383267ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13376973174958759907 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/busybox.17bc7ba0647d4828\" mod_revision:481 > success:<request_put:<key:\"/registry/events/default/busybox.17bc7ba0647d4828\" value_size:678 lease:4153601138103983990 >> failure:<request_range:<key:\"/registry/events/default/busybox.17bc7ba0647d4828\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T00:58:09.779448Z","caller":"traceutil/trace.go:171","msg":"trace[245294986] linearizableReadLoop","detail":"{readStateIndex:506; appliedIndex:505; }","duration":"664.105243ms","start":"2024-03-14T00:58:09.115327Z","end":"2024-03-14T00:58:09.779433Z","steps":["trace[245294986] 'read index received'  (duration: 406.172625ms)","trace[245294986] 'applied index is now lower than readState.Index'  (duration: 257.931557ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T00:58:09.779524Z","caller":"traceutil/trace.go:171","msg":"trace[2022133822] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"664.849816ms","start":"2024-03-14T00:58:09.114665Z","end":"2024-03-14T00:58:09.779515Z","steps":["trace[2022133822] 'process raft request'  (duration: 407.085288ms)","trace[2022133822] 'compare'  (duration: 257.068402ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T00:58:09.779594Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T00:58:09.114653Z","time spent":"664.894497ms","remote":"127.0.0.1:60372","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":745,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/busybox.17bc7ba0647d4828\" mod_revision:481 > success:<request_put:<key:\"/registry/events/default/busybox.17bc7ba0647d4828\" value_size:678 lease:4153601138103983990 >> failure:<request_range:<key:\"/registry/events/default/busybox.17bc7ba0647d4828\" > >"}
	{"level":"warn","ts":"2024-03-14T00:58:09.779821Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.767481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-585806\" ","response":"range_response_count:1 size:4605"}
	{"level":"info","ts":"2024-03-14T00:58:09.779985Z","caller":"traceutil/trace.go:171","msg":"trace[1197537899] range","detail":"{range_begin:/registry/minions/no-preload-585806; range_end:; response_count:1; response_revision:487; }","duration":"154.834363ms","start":"2024-03-14T00:58:09.625039Z","end":"2024-03-14T00:58:09.779874Z","steps":["trace[1197537899] 'agreement among raft nodes before linearized reading'  (duration: 154.541066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:09.780123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"664.788437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" ","response":"range_response_count:1 size:840"}
	{"level":"info","ts":"2024-03-14T00:58:09.780179Z","caller":"traceutil/trace.go:171","msg":"trace[917320323] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:1; response_revision:487; }","duration":"664.845462ms","start":"2024-03-14T00:58:09.115324Z","end":"2024-03-14T00:58:09.780169Z","steps":["trace[917320323] 'agreement among raft nodes before linearized reading'  (duration: 664.720836ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:09.780212Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T00:58:09.115296Z","time spent":"664.906193ms","remote":"127.0.0.1:60640","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":864,"request content":"key:\"/registry/clusterroles/system:aggregate-to-admin\" "}
	{"level":"info","ts":"2024-03-14T01:08:05.212167Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":796}
	{"level":"info","ts":"2024-03-14T01:08:05.21449Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":796,"took":"1.933366ms","hash":339514941}
	{"level":"info","ts":"2024-03-14T01:08:05.214548Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":339514941,"revision":796,"compact-revision":-1}
	{"level":"info","ts":"2024-03-14T01:13:05.219963Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1038}
	{"level":"info","ts":"2024-03-14T01:13:05.221602Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1038,"took":"1.003759ms","hash":2133909408}
	{"level":"info","ts":"2024-03-14T01:13:05.221957Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2133909408,"revision":1038,"compact-revision":796}
	{"level":"info","ts":"2024-03-14T01:17:48.667806Z","caller":"traceutil/trace.go:171","msg":"trace[1132572403] transaction","detail":"{read_only:false; response_revision:1510; number_of_response:1; }","duration":"100.891865ms","start":"2024-03-14T01:17:48.566864Z","end":"2024-03-14T01:17:48.667756Z","steps":["trace[1132572403] 'process raft request'  (duration: 100.729482ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T01:17:48.93732Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.53292ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T01:17:48.937409Z","caller":"traceutil/trace.go:171","msg":"trace[800187931] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1510; }","duration":"166.712894ms","start":"2024-03-14T01:17:48.770681Z","end":"2024-03-14T01:17:48.937394Z","steps":["trace[800187931] 'range keys from in-memory index tree'  (duration: 166.356977ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T01:17:48.937977Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.707158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T01:17:48.938042Z","caller":"traceutil/trace.go:171","msg":"trace[759135902] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1510; }","duration":"173.879125ms","start":"2024-03-14T01:17:48.764149Z","end":"2024-03-14T01:17:48.938028Z","steps":["trace[759135902] 'range keys from in-memory index tree'  (duration: 173.621118ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T01:18:05.230125Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1281}
	{"level":"info","ts":"2024-03-14T01:18:05.231378Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1281,"took":"1.012935ms","hash":3390537847}
	{"level":"info","ts":"2024-03-14T01:18:05.231441Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3390537847,"revision":1281,"compact-revision":1038}
	
	
	==> kernel <==
	 01:18:21 up 20 min,  0 users,  load average: 0.31, 0.15, 0.10
	Linux no-preload-585806 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] <==
	I0314 01:13:07.756342       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:14:07.755664       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:14:07.755814       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:14:07.755826       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:14:07.757066       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:14:07.757118       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:14:07.757131       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:16:07.756328       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:16:07.756688       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:16:07.756724       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:16:07.757493       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:16:07.757557       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:16:07.758743       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:18:06.758863       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:18:06.759094       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0314 01:18:07.760101       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:18:07.760174       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:18:07.760187       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:18:07.760115       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:18:07.760264       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:18:07.761323       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] <==
	I0314 01:12:51.919783       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:13:21.459721       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:13:21.929226       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:13:51.465515       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:13:51.937203       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:14:21.470545       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:14:21.945633       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 01:14:42.317341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="238.763µs"
	E0314 01:14:51.475795       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:14:51.956448       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 01:14:56.316427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="159.732µs"
	E0314 01:15:21.481129       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:15:21.966491       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:15:51.486854       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:15:51.977066       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:16:21.493294       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:16:21.984607       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:16:51.499460       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:16:51.992744       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:17:21.506439       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:17:22.001330       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:17:51.513136       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:17:52.011678       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:18:21.519052       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:18:22.021004       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] <==
	I0314 00:58:09.613353       1 server_others.go:72] "Using iptables proxy"
	I0314 00:58:09.783408       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.115"]
	I0314 00:58:09.831983       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0314 00:58:09.832012       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 00:58:09.832028       1 server_others.go:168] "Using iptables Proxier"
	I0314 00:58:09.836006       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 00:58:09.836385       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0314 00:58:09.836432       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:58:09.837591       1 config.go:188] "Starting service config controller"
	I0314 00:58:09.837677       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 00:58:09.838011       1 config.go:97] "Starting endpoint slice config controller"
	I0314 00:58:09.838066       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 00:58:09.838724       1 config.go:315] "Starting node config controller"
	I0314 00:58:09.845834       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 00:58:09.845879       1 shared_informer.go:318] Caches are synced for node config
	I0314 00:58:09.939146       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 00:58:09.939538       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] <==
	I0314 00:58:04.272869       1 serving.go:380] Generated self-signed cert in-memory
	W0314 00:58:06.721345       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 00:58:06.721448       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W0314 00:58:06.721477       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 00:58:06.721501       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 00:58:06.750451       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0314 00:58:06.750571       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:58:06.752277       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 00:58:06.752392       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 00:58:06.753236       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 00:58:06.753293       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 00:58:06.853008       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 01:16:02 no-preload-585806 kubelet[1328]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 01:16:02 no-preload-585806 kubelet[1328]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 01:16:02 no-preload-585806 kubelet[1328]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:16:02 no-preload-585806 kubelet[1328]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:16:11 no-preload-585806 kubelet[1328]: E0314 01:16:11.297296    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:16:24 no-preload-585806 kubelet[1328]: E0314 01:16:24.297699    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:16:37 no-preload-585806 kubelet[1328]: E0314 01:16:37.297931    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:16:51 no-preload-585806 kubelet[1328]: E0314 01:16:51.297947    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:17:02 no-preload-585806 kubelet[1328]: E0314 01:17:02.298651    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:17:02 no-preload-585806 kubelet[1328]: E0314 01:17:02.331572    1328 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 01:17:02 no-preload-585806 kubelet[1328]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 01:17:02 no-preload-585806 kubelet[1328]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 01:17:02 no-preload-585806 kubelet[1328]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:17:02 no-preload-585806 kubelet[1328]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:17:16 no-preload-585806 kubelet[1328]: E0314 01:17:16.298115    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:17:29 no-preload-585806 kubelet[1328]: E0314 01:17:29.297338    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:17:42 no-preload-585806 kubelet[1328]: E0314 01:17:42.302483    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:17:54 no-preload-585806 kubelet[1328]: E0314 01:17:54.302037    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:18:02 no-preload-585806 kubelet[1328]: E0314 01:18:02.323294    1328 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 01:18:02 no-preload-585806 kubelet[1328]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 01:18:02 no-preload-585806 kubelet[1328]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 01:18:02 no-preload-585806 kubelet[1328]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:18:02 no-preload-585806 kubelet[1328]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:18:06 no-preload-585806 kubelet[1328]: E0314 01:18:06.298023    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	Mar 14 01:18:19 no-preload-585806 kubelet[1328]: E0314 01:18:19.296929    1328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7pzll" podUID="84952403-8cff-4fa3-b7ef-d98ab0edf7a8"
	
	
	==> storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] <==
	I0314 00:58:09.580403       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0314 00:58:39.582611       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] <==
	I0314 00:58:40.613396       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 00:58:40.627695       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 00:58:40.627755       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 00:58:40.640781       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 00:58:40.641063       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-585806_aac8273f-560e-4935-b9f5-770c1e6a7002!
	I0314 00:58:40.645742       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4035f95b-5bbe-4852-a5ce-adc15b7d357d", APIVersion:"v1", ResourceVersion:"561", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-585806_aac8273f-560e-4935-b9f5-770c1e6a7002 became leader
	I0314 00:58:40.741543       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-585806_aac8273f-560e-4935-b9f5-770c1e6a7002!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-585806 -n no-preload-585806
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-585806 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-7pzll
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-585806 describe pod metrics-server-57f55c9bc5-7pzll
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-585806 describe pod metrics-server-57f55c9bc5-7pzll: exit status 1 (66.710044ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-7pzll" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-585806 describe pod metrics-server-57f55c9bc5-7pzll: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (404.00s)
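A recurring item in the kubelet log above is the iptables canary failure: ip6tables v1.8.9 cannot find the `nat' table on the Buildroot 2023.02.9 guest ("Table does not exist (do you need to insmod?)"). As an illustrative follow-up only (not something the harness runs), a minimal way to check whether the ip6table_nat module is available on the node, assuming the no-preload-585806 VM still exists, would be:

	out/minikube-linux-amd64 -p no-preload-585806 ssh
	lsmod | grep -E 'ip6?table_nat'   # is the IPv6 nat module loaded?
	sudo modprobe ip6table_nat        # try to load it; this fails if the guest kernel does not ship the module
	sudo ip6tables -t nat -L -n       # repeat the kind of nat-table operation the kubelet canary attempts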

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (463.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-652215 -n default-k8s-diff-port-652215
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-14 01:19:29.304911101 +0000 UTC m=+6797.374672595
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-652215 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-652215 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.539µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-652215 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
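The assertion here has two parts: a pod labeled k8s-app=kubernetes-dashboard must come up in the kubernetes-dashboard namespace, and the dashboard-metrics-scraper deployment must reference the overridden registry.k8s.io/echoserver:1.4 image configured by the earlier "addons enable dashboard --images=MetricsScraper=..." step. A minimal manual reproduction of those checks, assuming the default-k8s-diff-port-652215 context is still reachable, would be:

	kubectl --context default-k8s-diff-port-652215 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-652215 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'   # should contain registry.k8s.io/echoserver:1.4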
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652215 -n default-k8s-diff-port-652215
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-652215 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-652215 logs -n 25: (1.2668155s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-652215  | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC | 14 Mar 24 00:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC |                     |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-004791        | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-164135                 | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC | 14 Mar 24 01:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-585806                  | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-652215       | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 00:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-004791             | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC | 14 Mar 24 00:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 01:17 UTC | 14 Mar 24 01:17 UTC |
	| start   | -p newest-cni-970859 --memory=2200 --alsologtostderr   | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:17 UTC | 14 Mar 24 01:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-970859             | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:18 UTC | 14 Mar 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-970859                                   | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:18 UTC | 14 Mar 24 01:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 01:18 UTC | 14 Mar 24 01:18 UTC |
	| addons  | enable dashboard -p newest-cni-970859                  | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:18 UTC | 14 Mar 24 01:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-970859 --memory=2200 --alsologtostderr   | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:18 UTC | 14 Mar 24 01:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 01:18 UTC | 14 Mar 24 01:18 UTC |
	| image   | newest-cni-970859 image list                           | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:19 UTC | 14 Mar 24 01:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-970859                                   | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:19 UTC | 14 Mar 24 01:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-970859                                   | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:19 UTC | 14 Mar 24 01:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-970859                                   | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:19 UTC | 14 Mar 24 01:19 UTC |
	| delete  | -p newest-cni-970859                                   | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:19 UTC | 14 Mar 24 01:19 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 01:18:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 01:18:25.065242   71999 out.go:291] Setting OutFile to fd 1 ...
	I0314 01:18:25.065505   71999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 01:18:25.065515   71999 out.go:304] Setting ErrFile to fd 2...
	I0314 01:18:25.065520   71999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 01:18:25.065710   71999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 01:18:25.066243   71999 out.go:298] Setting JSON to false
	I0314 01:18:25.067154   71999 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7248,"bootTime":1710371857,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 01:18:25.067221   71999 start.go:139] virtualization: kvm guest
	I0314 01:18:25.069709   71999 out.go:177] * [newest-cni-970859] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 01:18:25.071461   71999 notify.go:220] Checking for updates...
	I0314 01:18:25.071476   71999 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 01:18:25.072963   71999 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 01:18:25.074288   71999 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 01:18:25.076578   71999 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 01:18:25.077984   71999 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 01:18:25.079313   71999 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 01:18:25.080973   71999 config.go:182] Loaded profile config "newest-cni-970859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 01:18:25.081371   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:25.081439   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:25.096681   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0314 01:18:25.097075   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:25.097568   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:18:25.097584   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:25.097874   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:25.098076   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:25.098321   71999 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 01:18:25.098702   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:25.098744   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:25.113739   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0314 01:18:25.114133   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:25.114622   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:18:25.114648   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:25.115011   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:25.115198   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:25.150933   71999 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 01:18:25.152364   71999 start.go:297] selected driver: kvm2
	I0314 01:18:25.152376   71999 start.go:901] validating driver "kvm2" against &{Name:newest-cni-970859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.0-rc.2 ClusterName:newest-cni-970859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.249 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pod
s:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 01:18:25.152499   71999 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 01:18:25.153150   71999 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:18:25.153229   71999 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 01:18:25.168141   71999 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 01:18:25.168514   71999 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0314 01:18:25.168605   71999 cni.go:84] Creating CNI manager for ""
	I0314 01:18:25.168621   71999 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 01:18:25.168677   71999 start.go:340] cluster config:
	{Name:newest-cni-970859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-970859 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.249 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddres
s: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 01:18:25.168801   71999 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:18:25.170725   71999 out.go:177] * Starting "newest-cni-970859" primary control-plane node in "newest-cni-970859" cluster
	I0314 01:18:25.172241   71999 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 01:18:25.172301   71999 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0314 01:18:25.172315   71999 cache.go:56] Caching tarball of preloaded images
	I0314 01:18:25.172390   71999 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 01:18:25.172405   71999 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0314 01:18:25.172537   71999 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/config.json ...
	I0314 01:18:25.172745   71999 start.go:360] acquireMachinesLock for newest-cni-970859: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 01:18:25.172793   71999 start.go:364] duration metric: took 27.491µs to acquireMachinesLock for "newest-cni-970859"
	I0314 01:18:25.172811   71999 start.go:96] Skipping create...Using existing machine configuration
	I0314 01:18:25.172821   71999 fix.go:54] fixHost starting: 
	I0314 01:18:25.173100   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:25.173133   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:25.186900   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37521
	I0314 01:18:25.187339   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:25.187819   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:18:25.187845   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:25.188202   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:25.188397   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:25.188555   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetState
	I0314 01:18:25.190121   71999 fix.go:112] recreateIfNeeded on newest-cni-970859: state=Stopped err=<nil>
	I0314 01:18:25.190150   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	W0314 01:18:25.190302   71999 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 01:18:25.192134   71999 out.go:177] * Restarting existing kvm2 VM for "newest-cni-970859" ...
	I0314 01:18:25.193510   71999 main.go:141] libmachine: (newest-cni-970859) Calling .Start
	I0314 01:18:25.193669   71999 main.go:141] libmachine: (newest-cni-970859) Ensuring networks are active...
	I0314 01:18:25.194428   71999 main.go:141] libmachine: (newest-cni-970859) Ensuring network default is active
	I0314 01:18:25.194809   71999 main.go:141] libmachine: (newest-cni-970859) Ensuring network mk-newest-cni-970859 is active
	I0314 01:18:25.195263   71999 main.go:141] libmachine: (newest-cni-970859) Getting domain xml...
	I0314 01:18:25.195985   71999 main.go:141] libmachine: (newest-cni-970859) Creating domain...
	I0314 01:18:26.418558   71999 main.go:141] libmachine: (newest-cni-970859) Waiting to get IP...
	I0314 01:18:26.419739   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:26.420270   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:26.420355   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:26.420229   72034 retry.go:31] will retry after 304.875728ms: waiting for machine to come up
	I0314 01:18:26.726959   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:26.727553   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:26.727580   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:26.727490   72034 retry.go:31] will retry after 384.820012ms: waiting for machine to come up
	I0314 01:18:27.114235   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:27.114701   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:27.114729   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:27.114656   72034 retry.go:31] will retry after 331.434823ms: waiting for machine to come up
	I0314 01:18:27.448203   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:27.448756   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:27.448786   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:27.448697   72034 retry.go:31] will retry after 564.139954ms: waiting for machine to come up
	I0314 01:18:28.014521   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:28.015001   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:28.015035   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:28.014981   72034 retry.go:31] will retry after 510.516518ms: waiting for machine to come up
	I0314 01:18:28.526652   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:28.527127   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:28.527158   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:28.527075   72034 retry.go:31] will retry after 777.320743ms: waiting for machine to come up
	I0314 01:18:29.306005   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:29.306439   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:29.306463   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:29.306392   72034 retry.go:31] will retry after 944.794907ms: waiting for machine to come up
	I0314 01:18:30.252501   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:30.253080   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:30.253110   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:30.253013   72034 retry.go:31] will retry after 1.254518848s: waiting for machine to come up
	I0314 01:18:31.509453   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:31.509952   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:31.509982   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:31.509892   72034 retry.go:31] will retry after 1.557179543s: waiting for machine to come up
	I0314 01:18:33.068147   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:33.068639   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:33.068663   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:33.068606   72034 retry.go:31] will retry after 2.280451267s: waiting for machine to come up
	I0314 01:18:35.351149   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:35.351617   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:35.351645   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:35.351551   72034 retry.go:31] will retry after 2.74915389s: waiting for machine to come up
	I0314 01:18:38.103880   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:38.104372   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:38.104392   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:38.104329   72034 retry.go:31] will retry after 2.335472812s: waiting for machine to come up
	I0314 01:18:40.441227   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:40.441593   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:40.441632   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:40.441551   72034 retry.go:31] will retry after 3.28153208s: waiting for machine to come up
	I0314 01:18:43.724560   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.725062   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has current primary IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.725098   71999 main.go:141] libmachine: (newest-cni-970859) Found IP for machine: 192.168.72.249
	I0314 01:18:43.725109   71999 main.go:141] libmachine: (newest-cni-970859) Reserving static IP address...
	I0314 01:18:43.725569   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "newest-cni-970859", mac: "52:54:00:75:c3:8f", ip: "192.168.72.249"} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:43.725607   71999 main.go:141] libmachine: (newest-cni-970859) Reserved static IP address: 192.168.72.249
	I0314 01:18:43.725632   71999 main.go:141] libmachine: (newest-cni-970859) DBG | skip adding static IP to network mk-newest-cni-970859 - found existing host DHCP lease matching {name: "newest-cni-970859", mac: "52:54:00:75:c3:8f", ip: "192.168.72.249"}
	I0314 01:18:43.725652   71999 main.go:141] libmachine: (newest-cni-970859) DBG | Getting to WaitForSSH function...
	I0314 01:18:43.725663   71999 main.go:141] libmachine: (newest-cni-970859) Waiting for SSH to be available...
	I0314 01:18:43.728108   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.728496   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:43.728524   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.728661   71999 main.go:141] libmachine: (newest-cni-970859) DBG | Using SSH client type: external
	I0314 01:18:43.728688   71999 main.go:141] libmachine: (newest-cni-970859) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa (-rw-------)
	I0314 01:18:43.728729   71999 main.go:141] libmachine: (newest-cni-970859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 01:18:43.728743   71999 main.go:141] libmachine: (newest-cni-970859) DBG | About to run SSH command:
	I0314 01:18:43.728757   71999 main.go:141] libmachine: (newest-cni-970859) DBG | exit 0
	I0314 01:18:43.859120   71999 main.go:141] libmachine: (newest-cni-970859) DBG | SSH cmd err, output: <nil>: 
	I0314 01:18:43.859423   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetConfigRaw
	I0314 01:18:43.860151   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetIP
	I0314 01:18:43.862690   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.863047   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:43.863075   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.863303   71999 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/config.json ...
	I0314 01:18:43.863481   71999 machine.go:94] provisionDockerMachine start ...
	I0314 01:18:43.863500   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:43.863728   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:43.866072   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.866421   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:43.866447   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.866581   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:43.866774   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:43.866923   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:43.867124   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:43.867324   71999 main.go:141] libmachine: Using SSH client type: native
	I0314 01:18:43.867558   71999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:18:43.867574   71999 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 01:18:43.979541   71999 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 01:18:43.979576   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetMachineName
	I0314 01:18:43.979821   71999 buildroot.go:166] provisioning hostname "newest-cni-970859"
	I0314 01:18:43.979851   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetMachineName
	I0314 01:18:43.980030   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:43.982684   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.983092   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:43.983132   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.983263   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:43.983437   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:43.983586   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:43.983754   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:43.983934   71999 main.go:141] libmachine: Using SSH client type: native
	I0314 01:18:43.984110   71999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:18:43.984132   71999 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-970859 && echo "newest-cni-970859" | sudo tee /etc/hostname
	I0314 01:18:44.110336   71999 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-970859
	
	I0314 01:18:44.110367   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:44.113185   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.113554   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:44.113597   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.113812   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:44.114048   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:44.114196   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:44.114360   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:44.114545   71999 main.go:141] libmachine: Using SSH client type: native
	I0314 01:18:44.114730   71999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:18:44.114747   71999 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-970859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-970859/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-970859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 01:18:44.237686   71999 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 01:18:44.237724   71999 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 01:18:44.237788   71999 buildroot.go:174] setting up certificates
	I0314 01:18:44.237845   71999 provision.go:84] configureAuth start
	I0314 01:18:44.237864   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetMachineName
	I0314 01:18:44.238148   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetIP
	I0314 01:18:44.240546   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.240963   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:44.241000   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.241127   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:44.243553   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.243943   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:44.243981   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.244123   71999 provision.go:143] copyHostCerts
	I0314 01:18:44.244193   71999 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 01:18:44.244210   71999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 01:18:44.244317   71999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 01:18:44.244463   71999 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 01:18:44.244476   71999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 01:18:44.244523   71999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 01:18:44.244632   71999 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 01:18:44.244645   71999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 01:18:44.244684   71999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 01:18:44.244785   71999 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.newest-cni-970859 san=[127.0.0.1 192.168.72.249 localhost minikube newest-cni-970859]
	I0314 01:18:44.443331   71999 provision.go:177] copyRemoteCerts
	I0314 01:18:44.443385   71999 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 01:18:44.443413   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:44.446221   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.446601   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:44.446631   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.446830   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:44.447004   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:44.447141   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:44.447265   71999 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:18:44.537349   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 01:18:44.562237   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 01:18:44.587253   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 01:18:44.612150   71999 provision.go:87] duration metric: took 374.287634ms to configureAuth
	I0314 01:18:44.612177   71999 buildroot.go:189] setting minikube options for container-runtime
	I0314 01:18:44.612385   71999 config.go:182] Loaded profile config "newest-cni-970859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 01:18:44.612486   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:44.615221   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.615572   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:44.615599   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.615828   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:44.616010   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:44.616164   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:44.616291   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:44.616442   71999 main.go:141] libmachine: Using SSH client type: native
	I0314 01:18:44.616637   71999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:18:44.616661   71999 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 01:18:44.902142   71999 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 01:18:44.902171   71999 machine.go:97] duration metric: took 1.038676999s to provisionDockerMachine
	I0314 01:18:44.902183   71999 start.go:293] postStartSetup for "newest-cni-970859" (driver="kvm2")
	I0314 01:18:44.902195   71999 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 01:18:44.902216   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:44.902563   71999 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 01:18:44.902584   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:44.905097   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.905519   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:44.905553   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.905712   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:44.905930   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:44.906090   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:44.906296   71999 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:18:44.994738   71999 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 01:18:44.999290   71999 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 01:18:44.999315   71999 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 01:18:44.999389   71999 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 01:18:44.999491   71999 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 01:18:44.999604   71999 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 01:18:45.010035   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 01:18:45.035322   71999 start.go:296] duration metric: took 133.125614ms for postStartSetup
	I0314 01:18:45.035360   71999 fix.go:56] duration metric: took 19.862539441s for fixHost
	I0314 01:18:45.035379   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:45.038142   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.038497   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:45.038526   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.038664   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:45.038867   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:45.039025   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:45.039150   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:45.039298   71999 main.go:141] libmachine: Using SSH client type: native
	I0314 01:18:45.039495   71999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:18:45.039511   71999 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 01:18:45.151485   71999 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710379125.122499406
	
	I0314 01:18:45.151511   71999 fix.go:216] guest clock: 1710379125.122499406
	I0314 01:18:45.151520   71999 fix.go:229] Guest: 2024-03-14 01:18:45.122499406 +0000 UTC Remote: 2024-03-14 01:18:45.03536377 +0000 UTC m=+20.019852437 (delta=87.135636ms)
	I0314 01:18:45.151543   71999 fix.go:200] guest clock delta is within tolerance: 87.135636ms
	I0314 01:18:45.151550   71999 start.go:83] releasing machines lock for "newest-cni-970859", held for 19.978746044s
	I0314 01:18:45.151574   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:45.151883   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetIP
	I0314 01:18:45.154525   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.154940   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:45.154969   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.155100   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:45.155597   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:45.155783   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:45.155881   71999 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 01:18:45.155926   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:45.155979   71999 ssh_runner.go:195] Run: cat /version.json
	I0314 01:18:45.155999   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:45.158646   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.158933   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.159028   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:45.159057   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.159180   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:45.159289   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:45.159317   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.159341   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:45.159487   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:45.159492   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:45.159663   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:45.159673   71999 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:18:45.159817   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:45.159912   71999 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:18:45.244655   71999 ssh_runner.go:195] Run: systemctl --version
	I0314 01:18:45.281933   71999 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 01:18:45.426282   71999 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 01:18:45.433143   71999 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 01:18:45.433195   71999 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 01:18:45.450560   71999 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 01:18:45.450585   71999 start.go:494] detecting cgroup driver to use...
	I0314 01:18:45.450637   71999 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 01:18:45.468128   71999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 01:18:45.483378   71999 docker.go:217] disabling cri-docker service (if available) ...
	I0314 01:18:45.483434   71999 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 01:18:45.498259   71999 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 01:18:45.513120   71999 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 01:18:45.638461   71999 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 01:18:45.821284   71999 docker.go:233] disabling docker service ...
	I0314 01:18:45.821360   71999 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 01:18:45.837972   71999 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 01:18:45.853391   71999 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 01:18:45.989964   71999 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 01:18:46.118345   71999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 01:18:46.134351   71999 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 01:18:46.154640   71999 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 01:18:46.154694   71999 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 01:18:46.166202   71999 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 01:18:46.166263   71999 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 01:18:46.177611   71999 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 01:18:46.191043   71999 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 01:18:46.203918   71999 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 01:18:46.216038   71999 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 01:18:46.226398   71999 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 01:18:46.226450   71999 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 01:18:46.243188   71999 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 01:18:46.253945   71999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 01:18:46.374419   71999 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 01:18:46.514012   71999 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 01:18:46.514093   71999 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 01:18:46.519482   71999 start.go:562] Will wait 60s for crictl version
	I0314 01:18:46.519533   71999 ssh_runner.go:195] Run: which crictl
	I0314 01:18:46.523839   71999 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 01:18:46.562327   71999 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 01:18:46.562419   71999 ssh_runner.go:195] Run: crio --version
	I0314 01:18:46.592362   71999 ssh_runner.go:195] Run: crio --version
	I0314 01:18:46.625963   71999 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 01:18:46.627499   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetIP
	I0314 01:18:46.630405   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:46.630805   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:46.630834   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:46.631085   71999 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 01:18:46.636444   71999 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 01:18:46.652712   71999 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0314 01:18:46.654082   71999 kubeadm.go:877] updating cluster {Name:newest-cni-970859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-970859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.249 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 01:18:46.654210   71999 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 01:18:46.654283   71999 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 01:18:46.693475   71999 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 01:18:46.693559   71999 ssh_runner.go:195] Run: which lz4
	I0314 01:18:46.697846   71999 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 01:18:46.702309   71999 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 01:18:46.702339   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0314 01:18:48.271029   71999 crio.go:444] duration metric: took 1.57321697s to copy over tarball
	I0314 01:18:48.271105   71999 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 01:18:50.746969   71999 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.475826282s)
	I0314 01:18:50.746999   71999 crio.go:451] duration metric: took 2.475936871s to extract the tarball
	I0314 01:18:50.747008   71999 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 01:18:50.787923   71999 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 01:18:50.843323   71999 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 01:18:50.843350   71999 cache_images.go:84] Images are preloaded, skipping loading
	I0314 01:18:50.843359   71999 kubeadm.go:928] updating node { 192.168.72.249 8443 v1.29.0-rc.2 crio true true} ...
	I0314 01:18:50.843497   71999 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-970859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-970859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 01:18:50.843594   71999 ssh_runner.go:195] Run: crio config
	I0314 01:18:50.894152   71999 cni.go:84] Creating CNI manager for ""
	I0314 01:18:50.894180   71999 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 01:18:50.894197   71999 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0314 01:18:50.894228   71999 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.249 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-970859 NodeName:newest-cni-970859 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 01:18:50.894427   71999 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-970859"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 01:18:50.894499   71999 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 01:18:50.905730   71999 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 01:18:50.905809   71999 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 01:18:50.915542   71999 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0314 01:18:50.934612   71999 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 01:18:50.956522   71999 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0314 01:18:50.977590   71999 ssh_runner.go:195] Run: grep 192.168.72.249	control-plane.minikube.internal$ /etc/hosts
	I0314 01:18:50.981823   71999 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.249	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 01:18:50.997016   71999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 01:18:51.137522   71999 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 01:18:51.155880   71999 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859 for IP: 192.168.72.249
	I0314 01:18:51.155901   71999 certs.go:194] generating shared ca certs ...
	I0314 01:18:51.155915   71999 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:18:51.156062   71999 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 01:18:51.156100   71999 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 01:18:51.156109   71999 certs.go:256] generating profile certs ...
	I0314 01:18:51.156200   71999 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/client.key
	I0314 01:18:51.156296   71999 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.key.b2d72356
	I0314 01:18:51.156349   71999 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/proxy-client.key
	I0314 01:18:51.156504   71999 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 01:18:51.156548   71999 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 01:18:51.156559   71999 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 01:18:51.156589   71999 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 01:18:51.156637   71999 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 01:18:51.156667   71999 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 01:18:51.156727   71999 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 01:18:51.157477   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 01:18:51.201791   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 01:18:51.241354   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 01:18:51.289018   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 01:18:51.320359   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 01:18:51.352601   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 01:18:51.380952   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 01:18:51.410238   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 01:18:51.439733   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 01:18:51.472177   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 01:18:51.499689   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 01:18:51.526010   71999 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 01:18:51.545280   71999 ssh_runner.go:195] Run: openssl version
	I0314 01:18:51.551577   71999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 01:18:51.563444   71999 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 01:18:51.569479   71999 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 01:18:51.569548   71999 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 01:18:51.575860   71999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 01:18:51.589497   71999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 01:18:51.601394   71999 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 01:18:51.606604   71999 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 01:18:51.606654   71999 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 01:18:51.612677   71999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 01:18:51.624880   71999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 01:18:51.636792   71999 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 01:18:51.641542   71999 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 01:18:51.641603   71999 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 01:18:51.647570   71999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 01:18:51.659503   71999 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 01:18:51.664606   71999 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 01:18:51.671393   71999 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 01:18:51.677885   71999 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 01:18:51.684463   71999 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 01:18:51.690826   71999 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 01:18:51.697266   71999 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 01:18:51.703529   71999 kubeadm.go:391] StartCluster: {Name:newest-cni-970859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-970859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.249 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 01:18:51.703652   71999 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 01:18:51.703691   71999 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 01:18:51.748495   71999 cri.go:89] found id: ""
	I0314 01:18:51.748559   71999 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 01:18:51.759860   71999 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 01:18:51.759881   71999 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 01:18:51.759887   71999 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 01:18:51.759928   71999 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 01:18:51.771062   71999 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 01:18:51.771947   71999 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-970859" does not appear in /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 01:18:51.772429   71999 kubeconfig.go:62] /home/jenkins/minikube-integration/18375-4912/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-970859" cluster setting kubeconfig missing "newest-cni-970859" context setting]
	I0314 01:18:51.773293   71999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:18:51.783012   71999 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 01:18:51.796787   71999 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.249
	I0314 01:18:51.796823   71999 kubeadm.go:1153] stopping kube-system containers ...
	I0314 01:18:51.796837   71999 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 01:18:51.796894   71999 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 01:18:51.841291   71999 cri.go:89] found id: ""
	I0314 01:18:51.841363   71999 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 01:18:51.860316   71999 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:18:51.872646   71999 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:18:51.872669   71999 kubeadm.go:156] found existing configuration files:
	
	I0314 01:18:51.872724   71999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:18:51.883414   71999 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:18:51.883475   71999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:18:51.894424   71999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:18:51.904911   71999 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:18:51.904982   71999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:18:51.915064   71999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:18:51.924794   71999 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:18:51.924862   71999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:18:51.934720   71999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:18:51.944240   71999 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:18:51.944300   71999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 01:18:51.956544   71999 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 01:18:51.967076   71999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 01:18:52.106050   71999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 01:18:53.517726   71999 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.411636413s)
	I0314 01:18:53.517760   71999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 01:18:53.738713   71999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 01:18:53.827691   71999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 01:18:53.953611   71999 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:18:53.953704   71999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:18:54.454668   71999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:18:54.953964   71999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:18:55.004343   71999 api_server.go:72] duration metric: took 1.050735767s to wait for apiserver process to appear ...
	I0314 01:18:55.004370   71999 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:18:55.004390   71999 api_server.go:253] Checking apiserver healthz at https://192.168.72.249:8443/healthz ...
	I0314 01:18:55.004816   71999 api_server.go:269] stopped: https://192.168.72.249:8443/healthz: Get "https://192.168.72.249:8443/healthz": dial tcp 192.168.72.249:8443: connect: connection refused
	I0314 01:18:55.505234   71999 api_server.go:253] Checking apiserver healthz at https://192.168.72.249:8443/healthz ...
	I0314 01:18:57.897239   71999 api_server.go:279] https://192.168.72.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 01:18:57.897273   71999 api_server.go:103] status: https://192.168.72.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 01:18:57.897289   71999 api_server.go:253] Checking apiserver healthz at https://192.168.72.249:8443/healthz ...
	I0314 01:18:57.934493   71999 api_server.go:279] https://192.168.72.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 01:18:57.934532   71999 api_server.go:103] status: https://192.168.72.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 01:18:58.004711   71999 api_server.go:253] Checking apiserver healthz at https://192.168.72.249:8443/healthz ...
	I0314 01:18:58.013278   71999 api_server.go:279] https://192.168.72.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 01:18:58.013305   71999 api_server.go:103] status: https://192.168.72.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 01:18:58.504864   71999 api_server.go:253] Checking apiserver healthz at https://192.168.72.249:8443/healthz ...
	I0314 01:18:58.509385   71999 api_server.go:279] https://192.168.72.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 01:18:58.509412   71999 api_server.go:103] status: https://192.168.72.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 01:18:59.004966   71999 api_server.go:253] Checking apiserver healthz at https://192.168.72.249:8443/healthz ...
	I0314 01:18:59.015590   71999 api_server.go:279] https://192.168.72.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 01:18:59.015619   71999 api_server.go:103] status: https://192.168.72.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 01:18:59.505222   71999 api_server.go:253] Checking apiserver healthz at https://192.168.72.249:8443/healthz ...
	I0314 01:18:59.511002   71999 api_server.go:279] https://192.168.72.249:8443/healthz returned 200:
	ok
	I0314 01:18:59.520891   71999 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 01:18:59.520928   71999 api_server.go:131] duration metric: took 4.516547038s to wait for apiserver health ...
	I0314 01:18:59.520939   71999 cni.go:84] Creating CNI manager for ""
	I0314 01:18:59.520947   71999 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 01:18:59.522430   71999 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 01:18:59.523658   71999 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 01:18:59.563944   71999 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 01:18:59.614233   71999 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:18:59.626490   71999 system_pods.go:59] 8 kube-system pods found
	I0314 01:18:59.626523   71999 system_pods.go:61] "coredns-76f75df574-kdg7j" [85edaa1f-af91-478d-902b-9e128799b04d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 01:18:59.626533   71999 system_pods.go:61] "etcd-newest-cni-970859" [0c78288d-afc9-4ea5-90a6-2d6ac021a743] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 01:18:59.626544   71999 system_pods.go:61] "kube-apiserver-newest-cni-970859" [8fac6b39-ce44-466c-a2e6-a7eac41c65be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 01:18:59.626552   71999 system_pods.go:61] "kube-controller-manager-newest-cni-970859" [231f4ead-1cc9-4e25-b8e9-6444bc103e24] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 01:18:59.626561   71999 system_pods.go:61] "kube-proxy-hpk8q" [b139648f-e89d-4f2e-be3b-445dec5997dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 01:18:59.626573   71999 system_pods.go:61] "kube-scheduler-newest-cni-970859" [94d0802f-2581-4ac5-93ca-63dd410a95c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 01:18:59.626579   71999 system_pods.go:61] "metrics-server-57f55c9bc5-bmzdq" [998df91d-9396-44cc-b918-cb218b1cd2f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:18:59.626585   71999 system_pods.go:61] "storage-provisioner" [25ba16d4-5e06-4666-b312-af1619365c85] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 01:18:59.626597   71999 system_pods.go:74] duration metric: took 12.342667ms to wait for pod list to return data ...
	I0314 01:18:59.626603   71999 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:18:59.631683   71999 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:18:59.631719   71999 node_conditions.go:123] node cpu capacity is 2
	I0314 01:18:59.631733   71999 node_conditions.go:105] duration metric: took 5.122766ms to run NodePressure ...
	I0314 01:18:59.631755   71999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 01:18:59.931424   71999 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 01:18:59.943719   71999 ops.go:34] apiserver oom_adj: -16
	I0314 01:18:59.943739   71999 kubeadm.go:591] duration metric: took 8.183848057s to restartPrimaryControlPlane
	I0314 01:18:59.943749   71999 kubeadm.go:393] duration metric: took 8.240231861s to StartCluster
	I0314 01:18:59.943764   71999 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:18:59.943865   71999 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 01:18:59.944640   71999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:18:59.944869   71999 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.249 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 01:18:59.946652   71999 out.go:177] * Verifying Kubernetes components...
	I0314 01:18:59.944922   71999 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 01:18:59.945081   71999 config.go:182] Loaded profile config "newest-cni-970859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 01:18:59.948009   71999 addons.go:69] Setting default-storageclass=true in profile "newest-cni-970859"
	I0314 01:18:59.948016   71999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 01:18:59.948017   71999 addons.go:69] Setting metrics-server=true in profile "newest-cni-970859"
	I0314 01:18:59.948038   71999 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-970859"
	I0314 01:18:59.948048   71999 addons.go:234] Setting addon metrics-server=true in "newest-cni-970859"
	W0314 01:18:59.948056   71999 addons.go:243] addon metrics-server should already be in state true
	I0314 01:18:59.948010   71999 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-970859"
	I0314 01:18:59.948106   71999 addons.go:69] Setting dashboard=true in profile "newest-cni-970859"
	I0314 01:18:59.948125   71999 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-970859"
	W0314 01:18:59.948138   71999 addons.go:243] addon storage-provisioner should already be in state true
	I0314 01:18:59.948145   71999 addons.go:234] Setting addon dashboard=true in "newest-cni-970859"
	W0314 01:18:59.948156   71999 addons.go:243] addon dashboard should already be in state true
	I0314 01:18:59.948170   71999 host.go:66] Checking if "newest-cni-970859" exists ...
	I0314 01:18:59.948088   71999 host.go:66] Checking if "newest-cni-970859" exists ...
	I0314 01:18:59.948204   71999 host.go:66] Checking if "newest-cni-970859" exists ...
	I0314 01:18:59.948425   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:59.948457   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:59.948537   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:59.948567   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:59.948568   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:59.948599   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:59.948707   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:59.948732   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:59.964308   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43377
	I0314 01:18:59.964736   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:59.965353   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:18:59.965373   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:59.965786   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:59.966044   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetState
	I0314 01:18:59.968023   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39463
	I0314 01:18:59.968196   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I0314 01:18:59.968398   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
	I0314 01:18:59.968408   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:59.968596   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:59.968715   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:59.969042   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:18:59.969058   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:59.969151   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:18:59.969161   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:59.969235   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:18:59.969258   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:59.969443   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:59.969452   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:59.969603   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:59.969608   71999 addons.go:234] Setting addon default-storageclass=true in "newest-cni-970859"
	W0314 01:18:59.969630   71999 addons.go:243] addon default-storageclass should already be in state true
	I0314 01:18:59.969658   71999 host.go:66] Checking if "newest-cni-970859" exists ...
	I0314 01:18:59.969925   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:59.969947   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:59.969946   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:59.969963   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:59.970083   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:59.970124   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:59.970427   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:59.970456   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:59.984967   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43771
	I0314 01:18:59.985443   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:59.985517   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39583
	I0314 01:18:59.985844   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:59.985959   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:18:59.985973   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:59.986290   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:18:59.986309   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:59.986293   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:59.986509   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetState
	I0314 01:18:59.986838   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:59.987053   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetState
	I0314 01:18:59.988609   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36997
	I0314 01:18:59.988715   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:59.989094   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:59.991011   71999 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0314 01:18:59.989324   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:59.989469   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:18:59.991242   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0314 01:18:59.992442   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:59.993856   71999 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0314 01:18:59.992864   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:59.992997   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:59.995278   71999 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 01:18:59.996658   71999 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 01:18:59.996674   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 01:18:59.996690   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:59.995305   71999 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0314 01:18:59.996722   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0314 01:18:59.996745   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:59.995724   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:18:59.996816   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:59.995895   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:59.996888   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:59.997701   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:59.998245   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetState
	I0314 01:19:00.001197   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:19:00.001438   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:19:00.003020   71999 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 01:19:00.001660   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:19:00.001902   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:19:00.002302   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:19:00.002364   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:19:00.004324   71999 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 01:19:00.004345   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 01:19:00.004349   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:19:00.004365   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:19:00.004374   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:19:00.004390   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:19:00.004469   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:19:00.004522   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:19:00.004635   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:19:00.004688   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:19:00.004788   71999 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:19:00.004861   71999 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:19:00.007575   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:19:00.008052   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:19:00.008066   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:19:00.008287   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:19:00.008477   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:19:00.008639   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:19:00.008790   71999 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:19:00.015095   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I0314 01:19:00.015439   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:19:00.015863   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:19:00.015880   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:19:00.016150   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:19:00.016344   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetState
	I0314 01:19:00.017902   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:19:00.018143   71999 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 01:19:00.018157   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 01:19:00.018168   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:19:00.020995   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:19:00.021449   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:19:00.021480   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:19:00.021626   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:19:00.021786   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:19:00.021929   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:19:00.022079   71999 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:19:00.212526   71999 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 01:19:00.234831   71999 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:19:00.234920   71999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:19:00.248786   71999 api_server.go:72] duration metric: took 303.885445ms to wait for apiserver process to appear ...
	I0314 01:19:00.248809   71999 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:19:00.248829   71999 api_server.go:253] Checking apiserver healthz at https://192.168.72.249:8443/healthz ...
	I0314 01:19:00.254022   71999 api_server.go:279] https://192.168.72.249:8443/healthz returned 200:
	ok
	I0314 01:19:00.255286   71999 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 01:19:00.255310   71999 api_server.go:131] duration metric: took 6.493766ms to wait for apiserver health ...
	I0314 01:19:00.255320   71999 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:19:00.261836   71999 system_pods.go:59] 8 kube-system pods found
	I0314 01:19:00.261864   71999 system_pods.go:61] "coredns-76f75df574-kdg7j" [85edaa1f-af91-478d-902b-9e128799b04d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 01:19:00.261877   71999 system_pods.go:61] "etcd-newest-cni-970859" [0c78288d-afc9-4ea5-90a6-2d6ac021a743] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 01:19:00.261888   71999 system_pods.go:61] "kube-apiserver-newest-cni-970859" [8fac6b39-ce44-466c-a2e6-a7eac41c65be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 01:19:00.261898   71999 system_pods.go:61] "kube-controller-manager-newest-cni-970859" [231f4ead-1cc9-4e25-b8e9-6444bc103e24] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 01:19:00.261909   71999 system_pods.go:61] "kube-proxy-hpk8q" [b139648f-e89d-4f2e-be3b-445dec5997dd] Running
	I0314 01:19:00.261916   71999 system_pods.go:61] "kube-scheduler-newest-cni-970859" [94d0802f-2581-4ac5-93ca-63dd410a95c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 01:19:00.261933   71999 system_pods.go:61] "metrics-server-57f55c9bc5-bmzdq" [998df91d-9396-44cc-b918-cb218b1cd2f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:19:00.261938   71999 system_pods.go:61] "storage-provisioner" [25ba16d4-5e06-4666-b312-af1619365c85] Running
	I0314 01:19:00.261946   71999 system_pods.go:74] duration metric: took 6.619452ms to wait for pod list to return data ...
	I0314 01:19:00.261959   71999 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:19:00.264531   71999 default_sa.go:45] found service account: "default"
	I0314 01:19:00.264554   71999 default_sa.go:55] duration metric: took 2.584414ms for default service account to be created ...
	I0314 01:19:00.264566   71999 kubeadm.go:576] duration metric: took 319.67046ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0314 01:19:00.264583   71999 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:19:00.268141   71999 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:19:00.268159   71999 node_conditions.go:123] node cpu capacity is 2
	I0314 01:19:00.268168   71999 node_conditions.go:105] duration metric: took 3.580499ms to run NodePressure ...
	I0314 01:19:00.268183   71999 start.go:240] waiting for startup goroutines ...
	I0314 01:19:00.289635   71999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 01:19:00.340281   71999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 01:19:00.360430   71999 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 01:19:00.360449   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 01:19:00.394380   71999 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0314 01:19:00.394400   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0314 01:19:00.406808   71999 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 01:19:00.406834   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 01:19:00.466936   71999 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 01:19:00.466961   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 01:19:00.476265   71999 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0314 01:19:00.476290   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0314 01:19:00.508923   71999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 01:19:00.552777   71999 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0314 01:19:00.552809   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0314 01:19:00.613427   71999 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0314 01:19:00.613455   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0314 01:19:00.685713   71999 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0314 01:19:00.685739   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0314 01:19:00.748970   71999 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0314 01:19:00.749004   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0314 01:19:00.775423   71999 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0314 01:19:00.775448   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0314 01:19:00.833038   71999 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0314 01:19:00.833074   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0314 01:19:00.879261   71999 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0314 01:19:00.879292   71999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0314 01:19:00.925515   71999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0314 01:19:01.485252   71999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.195588302s)
	I0314 01:19:01.485338   71999 main.go:141] libmachine: Making call to close driver server
	I0314 01:19:01.485340   71999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.145012628s)
	I0314 01:19:01.485356   71999 main.go:141] libmachine: (newest-cni-970859) Calling .Close
	I0314 01:19:01.485376   71999 main.go:141] libmachine: Making call to close driver server
	I0314 01:19:01.485388   71999 main.go:141] libmachine: (newest-cni-970859) Calling .Close
	I0314 01:19:01.485850   71999 main.go:141] libmachine: (newest-cni-970859) DBG | Closing plugin on server side
	I0314 01:19:01.485864   71999 main.go:141] libmachine: Successfully made call to close driver server
	I0314 01:19:01.485873   71999 main.go:141] libmachine: Successfully made call to close driver server
	I0314 01:19:01.485890   71999 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 01:19:01.485902   71999 main.go:141] libmachine: Making call to close driver server
	I0314 01:19:01.485911   71999 main.go:141] libmachine: (newest-cni-970859) Calling .Close
	I0314 01:19:01.485877   71999 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 01:19:01.485999   71999 main.go:141] libmachine: Making call to close driver server
	I0314 01:19:01.486007   71999 main.go:141] libmachine: (newest-cni-970859) Calling .Close
	I0314 01:19:01.486198   71999 main.go:141] libmachine: (newest-cni-970859) DBG | Closing plugin on server side
	I0314 01:19:01.486221   71999 main.go:141] libmachine: Successfully made call to close driver server
	I0314 01:19:01.486252   71999 main.go:141] libmachine: (newest-cni-970859) DBG | Closing plugin on server side
	I0314 01:19:01.486260   71999 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 01:19:01.487660   71999 main.go:141] libmachine: Successfully made call to close driver server
	I0314 01:19:01.487688   71999 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 01:19:01.494670   71999 main.go:141] libmachine: Making call to close driver server
	I0314 01:19:01.494690   71999 main.go:141] libmachine: (newest-cni-970859) Calling .Close
	I0314 01:19:01.494960   71999 main.go:141] libmachine: Successfully made call to close driver server
	I0314 01:19:01.494986   71999 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 01:19:01.494990   71999 main.go:141] libmachine: (newest-cni-970859) DBG | Closing plugin on server side
	I0314 01:19:01.641270   71999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.132303625s)
	I0314 01:19:01.641331   71999 main.go:141] libmachine: Making call to close driver server
	I0314 01:19:01.641345   71999 main.go:141] libmachine: (newest-cni-970859) Calling .Close
	I0314 01:19:01.641713   71999 main.go:141] libmachine: (newest-cni-970859) DBG | Closing plugin on server side
	I0314 01:19:01.641724   71999 main.go:141] libmachine: Successfully made call to close driver server
	I0314 01:19:01.641747   71999 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 01:19:01.641763   71999 main.go:141] libmachine: Making call to close driver server
	I0314 01:19:01.641777   71999 main.go:141] libmachine: (newest-cni-970859) Calling .Close
	I0314 01:19:01.641992   71999 main.go:141] libmachine: Successfully made call to close driver server
	I0314 01:19:01.642007   71999 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 01:19:01.642018   71999 addons.go:470] Verifying addon metrics-server=true in "newest-cni-970859"
	I0314 01:19:01.642028   71999 main.go:141] libmachine: (newest-cni-970859) DBG | Closing plugin on server side
	I0314 01:19:02.033327   71999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.10775858s)
	I0314 01:19:02.033383   71999 main.go:141] libmachine: Making call to close driver server
	I0314 01:19:02.033398   71999 main.go:141] libmachine: (newest-cni-970859) Calling .Close
	I0314 01:19:02.033702   71999 main.go:141] libmachine: Successfully made call to close driver server
	I0314 01:19:02.033719   71999 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 01:19:02.033727   71999 main.go:141] libmachine: Making call to close driver server
	I0314 01:19:02.033760   71999 main.go:141] libmachine: (newest-cni-970859) Calling .Close
	I0314 01:19:02.034022   71999 main.go:141] libmachine: Successfully made call to close driver server
	I0314 01:19:02.034039   71999 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 01:19:02.034066   71999 main.go:141] libmachine: (newest-cni-970859) DBG | Closing plugin on server side
	I0314 01:19:02.035775   71999 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-970859 addons enable metrics-server
	
	I0314 01:19:02.037253   71999 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0314 01:19:02.038534   71999 addons.go:505] duration metric: took 2.09361169s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0314 01:19:02.038569   71999 start.go:245] waiting for cluster config update ...
	I0314 01:19:02.038583   71999 start.go:254] writing updated cluster config ...
	I0314 01:19:02.038830   71999 ssh_runner.go:195] Run: rm -f paused
	I0314 01:19:02.090487   71999 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 01:19:02.092301   71999 out.go:177] * Done! kubectl is now configured to use "newest-cni-970859" cluster and "default" namespace by default
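	With the start sequence above complete, the addon state it reports can be cross-checked from the host. A minimal sketch, assuming the profile/context name shown in the log and the kubernetes-dashboard namespace that the dashboard manifests normally create:
	
	  minikube -p newest-cni-970859 addons list
	  kubectl --context newest-cni-970859 -n kubernetes-dashboard get pods
	  kubectl --context newest-cni-970859 -n kube-system get deployment metrics-server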
	
	
	==> CRI-O <==
	Mar 14 01:19:29 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:29.982546136Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379169982522763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef535a6d-54b7-474e-a5ad-52c189841791 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:19:29 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:29.983458826Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7e0de36-ada5-4c2c-9483-fd7bc1ad562a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:19:29 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:29.983528847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7e0de36-ada5-4c2c-9483-fd7bc1ad562a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:19:29 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:29.983738883Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377926430651898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd38b11caa9ecae61709e36c11ce0a2d55a582db35b9a40c807c77c83ea9ccf0,PodSandboxId:f459af31fbcfc57a6ccf275e114c35c3196dc929a895661dae76965fc767f2ab,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377905570926741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15df755f-762d-4797-8c90-09e96eb32663,},Annotations:map[string]string{io.kubernetes.container.hash: ddbb423,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82,PodSandboxId:dc13a072da6f67eedafd74d84c10c85a81cd31eb64df8d50339e03be53212b38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377903052682887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cc7x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ab007b-5498-4883-84b9-f034c3095fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 59d5a1f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377895532784689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0,PodSandboxId:3c29862f10046257df3c4741db6fa642102eca7ee5f59b4019823b024d41973d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377895517289307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7dwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e793aa69-a2c7-4404-9b74-
ed4ac39cb249,},Annotations:map[string]string{io.kubernetes.container.hash: d410d4fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648,PodSandboxId:d88399beaea4d05e65c3416e790f5f7d51e396ae61e213e70e323075217a3e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377890916987779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3e82cdea584c2c894c286b610c565cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8,PodSandboxId:7e10a1fb88c9c5d0ab1c8f5ec75431ea3309808ead8a210fa440c25afa3c0af2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377890932856696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f37b4408e1cefe3c5c4e0063552
da25c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c65a79f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b,PodSandboxId:0b6930e6937c38ec072f1d2c07b9e70101900faa20b50745f75131d4f9036509,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377890851666917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d91634415537e5502ac9ce085db4
d44,},Annotations:map[string]string{io.kubernetes.container.hash: bcf0e038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef,PodSandboxId:9b9d8f1b30ffc0b4fba311aa38b8f62b07a3c16df78b829a774d23e0c51ebabe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377890822666869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb12f5c0d3069473d890f54bf58f0c9
9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7e0de36-ada5-4c2c-9483-fd7bc1ad562a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.024463977Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=201223e0-8307-4238-874a-f0c0e55c6cc6 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.024851404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=201223e0-8307-4238-874a-f0c0e55c6cc6 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.026086499Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb22195a-7df9-47e7-bed8-14992712e365 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.026888928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379170026861096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb22195a-7df9-47e7-bed8-14992712e365 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.027614585Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0c3148d-3db7-4124-a0cb-67ecf9ac1852 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.027674013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0c3148d-3db7-4124-a0cb-67ecf9ac1852 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.028010737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377926430651898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd38b11caa9ecae61709e36c11ce0a2d55a582db35b9a40c807c77c83ea9ccf0,PodSandboxId:f459af31fbcfc57a6ccf275e114c35c3196dc929a895661dae76965fc767f2ab,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377905570926741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15df755f-762d-4797-8c90-09e96eb32663,},Annotations:map[string]string{io.kubernetes.container.hash: ddbb423,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82,PodSandboxId:dc13a072da6f67eedafd74d84c10c85a81cd31eb64df8d50339e03be53212b38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377903052682887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cc7x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ab007b-5498-4883-84b9-f034c3095fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 59d5a1f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377895532784689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0,PodSandboxId:3c29862f10046257df3c4741db6fa642102eca7ee5f59b4019823b024d41973d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377895517289307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7dwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e793aa69-a2c7-4404-9b74-
ed4ac39cb249,},Annotations:map[string]string{io.kubernetes.container.hash: d410d4fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648,PodSandboxId:d88399beaea4d05e65c3416e790f5f7d51e396ae61e213e70e323075217a3e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377890916987779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3e82cdea584c2c894c286b610c565cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8,PodSandboxId:7e10a1fb88c9c5d0ab1c8f5ec75431ea3309808ead8a210fa440c25afa3c0af2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377890932856696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f37b4408e1cefe3c5c4e0063552
da25c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c65a79f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b,PodSandboxId:0b6930e6937c38ec072f1d2c07b9e70101900faa20b50745f75131d4f9036509,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377890851666917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d91634415537e5502ac9ce085db4
d44,},Annotations:map[string]string{io.kubernetes.container.hash: bcf0e038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef,PodSandboxId:9b9d8f1b30ffc0b4fba311aa38b8f62b07a3c16df78b829a774d23e0c51ebabe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377890822666869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb12f5c0d3069473d890f54bf58f0c9
9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0c3148d-3db7-4124-a0cb-67ecf9ac1852 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.069650933Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0922e6bc-ea03-49db-ae20-fa9bf19acdc5 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.069736106Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0922e6bc-ea03-49db-ae20-fa9bf19acdc5 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.070949564Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b4fe3aad-d6c6-499a-a25f-d7332717b67e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.071676663Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379170071646127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4fe3aad-d6c6-499a-a25f-d7332717b67e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.072276027Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3dd30555-aa66-44ae-8bf4-525eabed3465 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.072324525Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3dd30555-aa66-44ae-8bf4-525eabed3465 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.072648263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377926430651898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd38b11caa9ecae61709e36c11ce0a2d55a582db35b9a40c807c77c83ea9ccf0,PodSandboxId:f459af31fbcfc57a6ccf275e114c35c3196dc929a895661dae76965fc767f2ab,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377905570926741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15df755f-762d-4797-8c90-09e96eb32663,},Annotations:map[string]string{io.kubernetes.container.hash: ddbb423,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82,PodSandboxId:dc13a072da6f67eedafd74d84c10c85a81cd31eb64df8d50339e03be53212b38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377903052682887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cc7x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ab007b-5498-4883-84b9-f034c3095fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 59d5a1f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377895532784689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0,PodSandboxId:3c29862f10046257df3c4741db6fa642102eca7ee5f59b4019823b024d41973d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377895517289307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7dwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e793aa69-a2c7-4404-9b74-
ed4ac39cb249,},Annotations:map[string]string{io.kubernetes.container.hash: d410d4fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648,PodSandboxId:d88399beaea4d05e65c3416e790f5f7d51e396ae61e213e70e323075217a3e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377890916987779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3e82cdea584c2c894c286b610c565cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8,PodSandboxId:7e10a1fb88c9c5d0ab1c8f5ec75431ea3309808ead8a210fa440c25afa3c0af2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377890932856696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f37b4408e1cefe3c5c4e0063552
da25c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c65a79f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b,PodSandboxId:0b6930e6937c38ec072f1d2c07b9e70101900faa20b50745f75131d4f9036509,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377890851666917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d91634415537e5502ac9ce085db4
d44,},Annotations:map[string]string{io.kubernetes.container.hash: bcf0e038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef,PodSandboxId:9b9d8f1b30ffc0b4fba311aa38b8f62b07a3c16df78b829a774d23e0c51ebabe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377890822666869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb12f5c0d3069473d890f54bf58f0c9
9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3dd30555-aa66-44ae-8bf4-525eabed3465 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.113222442Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad3c596b-428f-4c5b-8715-922c3e5b6553 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.113297660Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad3c596b-428f-4c5b-8715-922c3e5b6553 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.114914029Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e61cd89-7180-4775-a876-9e3afc144b45 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.115704178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379170115668568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e61cd89-7180-4775-a876-9e3afc144b45 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.116581882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af37c89b-e44a-4da6-91dd-c2314d407311 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.116678616Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af37c89b-e44a-4da6-91dd-c2314d407311 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:19:30 default-k8s-diff-port-652215 crio[694]: time="2024-03-14 01:19:30.117630741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377926430651898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd38b11caa9ecae61709e36c11ce0a2d55a582db35b9a40c807c77c83ea9ccf0,PodSandboxId:f459af31fbcfc57a6ccf275e114c35c3196dc929a895661dae76965fc767f2ab,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377905570926741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15df755f-762d-4797-8c90-09e96eb32663,},Annotations:map[string]string{io.kubernetes.container.hash: ddbb423,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82,PodSandboxId:dc13a072da6f67eedafd74d84c10c85a81cd31eb64df8d50339e03be53212b38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377903052682887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cc7x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ab007b-5498-4883-84b9-f034c3095fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 59d5a1f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3,PodSandboxId:5fa711b5045700f701b802a70c8c175c9af1c11c4f2c6b248bb6daf0be97fe5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377895532784689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b70cb5c2-863b-45d4-9363-dd364a240118,},Annotations:map[string]string{io.kubernetes.container.hash: 4c80fcdd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0,PodSandboxId:3c29862f10046257df3c4741db6fa642102eca7ee5f59b4019823b024d41973d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377895517289307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7dwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e793aa69-a2c7-4404-9b74-
ed4ac39cb249,},Annotations:map[string]string{io.kubernetes.container.hash: d410d4fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648,PodSandboxId:d88399beaea4d05e65c3416e790f5f7d51e396ae61e213e70e323075217a3e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377890916987779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3e82cdea584c2c894c286b610c565cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8,PodSandboxId:7e10a1fb88c9c5d0ab1c8f5ec75431ea3309808ead8a210fa440c25afa3c0af2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377890932856696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f37b4408e1cefe3c5c4e0063552
da25c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c65a79f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b,PodSandboxId:0b6930e6937c38ec072f1d2c07b9e70101900faa20b50745f75131d4f9036509,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377890851666917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d91634415537e5502ac9ce085db4
d44,},Annotations:map[string]string{io.kubernetes.container.hash: bcf0e038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef,PodSandboxId:9b9d8f1b30ffc0b4fba311aa38b8f62b07a3c16df78b829a774d23e0c51ebabe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377890822666869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-652215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb12f5c0d3069473d890f54bf58f0c9
9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af37c89b-e44a-4da6-91dd-c2314d407311 name=/runtime.v1.RuntimeService/ListContainers
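	The Version/ImageFsInfo/ListContainers request-response pairs above are routine CRI polling of the runtime, logged by CRI-O at debug level. A minimal sketch of issuing the same calls by hand from inside the guest, assuming the crictl client that minikube's CRI-O images normally ship:
	
	  sudo crictl version
	  sudo crictl imagefsinfo
	  sudo crictl ps -a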
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	051f66d3597a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   5fa711b504570       storage-provisioner
	cd38b11caa9ec       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   f459af31fbcfc       busybox
	e87ba9e92390a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      21 minutes ago      Running             coredns                   1                   dc13a072da6f6       coredns-5dd5756b68-cc7x2
	5306eb697d68f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   5fa711b504570       storage-provisioner
	08cdc002a4003       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      21 minutes ago      Running             kube-proxy                1                   3c29862f10046       kube-proxy-s7dwp
	2ad67f5626011       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      21 minutes ago      Running             etcd                      1                   7e10a1fb88c9c       etcd-default-k8s-diff-port-652215
	fe628f4a1ccd1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      21 minutes ago      Running             kube-controller-manager   1                   d88399beaea4d       kube-controller-manager-default-k8s-diff-port-652215
	a4ee2cfc6f4e7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      21 minutes ago      Running             kube-apiserver            1                   0b6930e6937c3       kube-apiserver-default-k8s-diff-port-652215
	46a128a58b665       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      21 minutes ago      Running             kube-scheduler            1                   9b9d8f1b30ffc       kube-scheduler-default-k8s-diff-port-652215
	
	
	==> coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40981 - 59055 "HINFO IN 6156123757758169156.6499896433233568811. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010389216s
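	The coredns pod behind this log can also be inspected directly; a minimal sketch, assuming the standard k8s-app=kube-dns label carried by the bundled CoreDNS deployment:
	
	  kubectl -n kube-system get pods -l k8s-app=kube-dns
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20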
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-652215
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-652215
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=default-k8s-diff-port-652215
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T00_50_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:50:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-652215
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 01:19:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 01:19:10 +0000   Thu, 14 Mar 2024 00:50:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 01:19:10 +0000   Thu, 14 Mar 2024 00:50:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 01:19:10 +0000   Thu, 14 Mar 2024 00:50:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 01:19:10 +0000   Thu, 14 Mar 2024 00:58:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.7
	  Hostname:    default-k8s-diff-port-652215
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a3cfe97cfe94492b0d86ace3f97a572
	  System UUID:                0a3cfe97-cfe9-4492-b0d8-6ace3f97a572
	  Boot ID:                    42fb8b0e-95b1-411a-afa5-f17310c551d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-cc7x2                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-652215                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-652215              250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-652215     200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-s7dwp                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-652215              100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-kll8v                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-652215 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-652215 event: Registered Node default-k8s-diff-port-652215 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-652215 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-652215 event: Registered Node default-k8s-diff-port-652215 in Controller
	
	
	==> dmesg <==
	[Mar14 00:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053291] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.592793] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.829169] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.646016] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar14 00:58] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.061012] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067401] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.185645] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.163529] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.267405] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +5.412315] systemd-fstab-generator[778]: Ignoring "noauto" option for root device
	[  +0.063391] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.072703] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +5.584815] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.516378] systemd-fstab-generator[1494]: Ignoring "noauto" option for root device
	[  +3.213665] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.456020] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] <==
	{"level":"info","ts":"2024-03-14T00:58:55.5716Z","caller":"traceutil/trace.go:171","msg":"trace[1670733807] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:653; }","duration":"170.217957ms","start":"2024-03-14T00:58:55.401371Z","end":"2024-03-14T00:58:55.571589Z","steps":["trace[1670733807] 'agreement among raft nodes before linearized reading'  (duration: 169.45442ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:55.828792Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.590401ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13993403373957011263 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-nvqm6xba4ntaaecslq7rnvskei\" mod_revision:645 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-nvqm6xba4ntaaecslq7rnvskei\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-nvqm6xba4ntaaecslq7rnvskei\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T00:58:55.829609Z","caller":"traceutil/trace.go:171","msg":"trace[1473180427] transaction","detail":"{read_only:false; response_revision:654; number_of_response:1; }","duration":"174.618874ms","start":"2024-03-14T00:58:55.654923Z","end":"2024-03-14T00:58:55.829542Z","steps":["trace[1473180427] 'process raft request'  (duration: 45.227559ms)","trace[1473180427] 'compare'  (duration: 128.471593ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T00:58:56.10999Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.720551ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13993403373957011266 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:42328e3a7775d741>","response":"size:39"}
	{"level":"warn","ts":"2024-03-14T00:58:56.42753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.335779ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13993403373957011268 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.7\" mod_revision:647 > success:<request_put:<key:\"/registry/masterleases/192.168.61.7\" value_size:65 lease:4770031337102235457 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.7\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T00:58:56.427767Z","caller":"traceutil/trace.go:171","msg":"trace[1588088470] linearizableReadLoop","detail":"{readStateIndex:703; appliedIndex:702; }","duration":"313.760508ms","start":"2024-03-14T00:58:56.11399Z","end":"2024-03-14T00:58:56.42775Z","steps":["trace[1588088470] 'read index received'  (duration: 109.983415ms)","trace[1588088470] 'applied index is now lower than readState.Index'  (duration: 203.774851ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T00:58:56.427884Z","caller":"traceutil/trace.go:171","msg":"trace[405897399] transaction","detail":"{read_only:false; response_revision:655; number_of_response:1; }","duration":"315.239025ms","start":"2024-03-14T00:58:56.112631Z","end":"2024-03-14T00:58:56.42787Z","steps":["trace[405897399] 'process raft request'  (duration: 111.384885ms)","trace[405897399] 'compare'  (duration: 202.709213ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T00:58:56.428011Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T00:58:56.112616Z","time spent":"315.326788ms","remote":"127.0.0.1:57288","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":116,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.61.7\" mod_revision:647 > success:<request_put:<key:\"/registry/masterleases/192.168.61.7\" value_size:65 lease:4770031337102235457 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.7\" > >"}
	{"level":"warn","ts":"2024-03-14T00:58:56.428116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"314.136047ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-652215\" ","response":"range_response_count:1 size:5800"}
	{"level":"info","ts":"2024-03-14T00:58:56.428171Z","caller":"traceutil/trace.go:171","msg":"trace[495731310] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-652215; range_end:; response_count:1; response_revision:655; }","duration":"314.191952ms","start":"2024-03-14T00:58:56.11397Z","end":"2024-03-14T00:58:56.428162Z","steps":["trace[495731310] 'agreement among raft nodes before linearized reading'  (duration: 313.900456ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T00:58:56.428234Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T00:58:56.113961Z","time spent":"314.262882ms","remote":"127.0.0.1:57440","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5822,"request content":"key:\"/registry/minions/default-k8s-diff-port-652215\" "}
	{"level":"info","ts":"2024-03-14T01:08:13.034459Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":876}
	{"level":"info","ts":"2024-03-14T01:08:13.037329Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":876,"took":"2.505017ms","hash":2665122675}
	{"level":"info","ts":"2024-03-14T01:08:13.037437Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2665122675,"revision":876,"compact-revision":-1}
	{"level":"info","ts":"2024-03-14T01:13:13.043278Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1119}
	{"level":"info","ts":"2024-03-14T01:13:13.045177Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1119,"took":"1.259459ms","hash":3801304245}
	{"level":"info","ts":"2024-03-14T01:13:13.045217Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3801304245,"revision":1119,"compact-revision":876}
	{"level":"info","ts":"2024-03-14T01:17:31.992024Z","caller":"traceutil/trace.go:171","msg":"trace[1890379204] transaction","detail":"{read_only:false; response_revision:1572; number_of_response:1; }","duration":"111.928039ms","start":"2024-03-14T01:17:31.880041Z","end":"2024-03-14T01:17:31.991969Z","steps":["trace[1890379204] 'process raft request'  (duration: 111.717825ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T01:17:48.195339Z","caller":"traceutil/trace.go:171","msg":"trace[1431444874] transaction","detail":"{read_only:false; response_revision:1584; number_of_response:1; }","duration":"108.760678ms","start":"2024-03-14T01:17:48.086372Z","end":"2024-03-14T01:17:48.195132Z","steps":["trace[1431444874] 'process raft request'  (duration: 108.607422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T01:17:48.443501Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.545066ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T01:17:48.443625Z","caller":"traceutil/trace.go:171","msg":"trace[1170859496] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1584; }","duration":"177.858502ms","start":"2024-03-14T01:17:48.265728Z","end":"2024-03-14T01:17:48.443587Z","steps":["trace[1170859496] 'range keys from in-memory index tree'  (duration: 177.428739ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T01:18:13.051699Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1361}
	{"level":"info","ts":"2024-03-14T01:18:13.053497Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1361,"took":"1.488191ms","hash":1604728063}
	{"level":"info","ts":"2024-03-14T01:18:13.053574Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1604728063,"revision":1361,"compact-revision":1119}
	{"level":"info","ts":"2024-03-14T01:18:52.753158Z","caller":"traceutil/trace.go:171","msg":"trace[98608591] transaction","detail":"{read_only:false; response_revision:1638; number_of_response:1; }","duration":"198.240657ms","start":"2024-03-14T01:18:52.554849Z","end":"2024-03-14T01:18:52.753089Z","steps":["trace[98608591] 'process raft request'  (duration: 198.020716ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:19:30 up 21 min,  0 users,  load average: 0.23, 0.19, 0.16
	Linux default-k8s-diff-port-652215 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] <==
	W0314 01:16:15.717093       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:16:15.717194       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:16:15.717204       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 01:17:14.593952       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 01:18:14.593909       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 01:18:14.718676       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:18:14.718801       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:18:14.719059       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 01:18:15.719503       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:18:15.719560       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:18:15.719569       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:18:15.719650       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:18:15.719727       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:18:15.720883       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 01:19:14.593668       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 01:19:15.720308       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:19:15.720512       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:19:15.720549       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:19:15.721475       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:19:15.721577       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:19:15.721593       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] <==
	I0314 01:13:58.285108       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:14:27.726150       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:14:28.293483       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 01:14:36.195010       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="233.663µs"
	I0314 01:14:50.189580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="196.239µs"
	E0314 01:14:57.732898       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:14:58.302679       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:15:27.738664       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:15:28.312793       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:15:57.744071       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:15:58.322865       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:16:27.748958       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:16:28.331442       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:16:57.754851       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:16:58.339575       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:17:27.764553       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:17:28.348806       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:17:57.771439       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:17:58.359688       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:18:27.776890       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:18:28.367899       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:18:57.783674       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:18:58.377697       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:19:27.788819       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:19:28.386764       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] <==
	I0314 00:58:15.762979       1 server_others.go:69] "Using iptables proxy"
	I0314 00:58:15.787000       1 node.go:141] Successfully retrieved node IP: 192.168.61.7
	I0314 00:58:15.887131       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 00:58:15.887150       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 00:58:15.890177       1 server_others.go:152] "Using iptables Proxier"
	I0314 00:58:15.890214       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 00:58:15.890345       1 server.go:846] "Version info" version="v1.28.4"
	I0314 00:58:15.890353       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:58:15.891209       1 config.go:188] "Starting service config controller"
	I0314 00:58:15.891256       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 00:58:15.891278       1 config.go:97] "Starting endpoint slice config controller"
	I0314 00:58:15.891281       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 00:58:15.891778       1 config.go:315] "Starting node config controller"
	I0314 00:58:15.891810       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 00:58:15.991464       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 00:58:15.991664       1 shared_informer.go:318] Caches are synced for service config
	I0314 00:58:15.991899       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] <==
	I0314 00:58:12.044288       1 serving.go:348] Generated self-signed cert in-memory
	W0314 00:58:14.657600       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 00:58:14.657710       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 00:58:14.657721       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 00:58:14.657727       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 00:58:14.718170       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 00:58:14.719438       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:58:14.729581       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 00:58:14.731119       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 00:58:14.731162       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 00:58:14.731181       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 00:58:14.831673       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 01:17:10 default-k8s-diff-port-652215 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:17:10 default-k8s-diff-port-652215 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:17:21 default-k8s-diff-port-652215 kubelet[910]: E0314 01:17:21.174805     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:17:34 default-k8s-diff-port-652215 kubelet[910]: E0314 01:17:34.177116     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:17:45 default-k8s-diff-port-652215 kubelet[910]: E0314 01:17:45.175084     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:17:59 default-k8s-diff-port-652215 kubelet[910]: E0314 01:17:59.175644     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:18:10 default-k8s-diff-port-652215 kubelet[910]: E0314 01:18:10.198779     910 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 01:18:10 default-k8s-diff-port-652215 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 01:18:10 default-k8s-diff-port-652215 kubelet[910]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 01:18:10 default-k8s-diff-port-652215 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:18:10 default-k8s-diff-port-652215 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:18:14 default-k8s-diff-port-652215 kubelet[910]: E0314 01:18:14.176566     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:18:29 default-k8s-diff-port-652215 kubelet[910]: E0314 01:18:29.175210     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:18:44 default-k8s-diff-port-652215 kubelet[910]: E0314 01:18:44.177872     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:18:57 default-k8s-diff-port-652215 kubelet[910]: E0314 01:18:57.174234     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:19:10 default-k8s-diff-port-652215 kubelet[910]: E0314 01:19:10.198538     910 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 01:19:10 default-k8s-diff-port-652215 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 01:19:10 default-k8s-diff-port-652215 kubelet[910]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 01:19:10 default-k8s-diff-port-652215 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:19:10 default-k8s-diff-port-652215 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:19:12 default-k8s-diff-port-652215 kubelet[910]: E0314 01:19:12.174120     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	Mar 14 01:19:24 default-k8s-diff-port-652215 kubelet[910]: E0314 01:19:24.193254     910 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 14 01:19:24 default-k8s-diff-port-652215 kubelet[910]: E0314 01:19:24.193740     910 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 14 01:19:24 default-k8s-diff-port-652215 kubelet[910]: E0314 01:19:24.194014     910 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-g9tq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-kll8v_kube-system(9060285f-ee6f-4d17-a7a6-a5a24f88d80a): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 14 01:19:24 default-k8s-diff-port-652215 kubelet[910]: E0314 01:19:24.194112     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-kll8v" podUID="9060285f-ee6f-4d17-a7a6-a5a24f88d80a"
	
	
	==> storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] <==
	I0314 00:58:46.528674       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 00:58:46.547307       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 00:58:46.547478       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 00:59:03.950035       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 00:59:03.950277       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-652215_8a1ed257-7f3b-4fd5-9395-baf25a9fe059!
	I0314 00:59:03.951276       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"13b9e68c-06e7-4501-9a93-d635a26c3276", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-652215_8a1ed257-7f3b-4fd5-9395-baf25a9fe059 became leader
	I0314 00:59:04.051322       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-652215_8a1ed257-7f3b-4fd5-9395-baf25a9fe059!
	
	
	==> storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] <==
	I0314 00:58:15.719816       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0314 00:58:45.722300       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-652215 -n default-k8s-diff-port-652215
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-652215 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-kll8v
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-652215 describe pod metrics-server-57f55c9bc5-kll8v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-652215 describe pod metrics-server-57f55c9bc5-kll8v: exit status 1 (62.964767ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-kll8v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-652215 describe pod metrics-server-57f55c9bc5-kll8v: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (463.99s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (382.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-164135 -n embed-certs-164135
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-14 01:18:50.672901156 +0000 UTC m=+6758.742662643
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-164135 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-164135 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.458µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-164135 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-164135 -n embed-certs-164135
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-164135 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-164135 logs -n 25: (3.242426989s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-573365 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | disable-driver-mounts-573365                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:51 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-164135            | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-585806             | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-652215  | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC | 14 Mar 24 00:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC |                     |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-004791        | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-164135                 | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC | 14 Mar 24 01:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-585806                  | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-652215       | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 00:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-004791             | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC | 14 Mar 24 00:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 01:17 UTC | 14 Mar 24 01:17 UTC |
	| start   | -p newest-cni-970859 --memory=2200 --alsologtostderr   | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:17 UTC | 14 Mar 24 01:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-970859             | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:18 UTC | 14 Mar 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-970859                                   | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:18 UTC | 14 Mar 24 01:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 01:18 UTC | 14 Mar 24 01:18 UTC |
	| addons  | enable dashboard -p newest-cni-970859                  | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:18 UTC | 14 Mar 24 01:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-970859 --memory=2200 --alsologtostderr   | newest-cni-970859            | jenkins | v1.32.0 | 14 Mar 24 01:18 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 01:18:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 01:18:25.065242   71999 out.go:291] Setting OutFile to fd 1 ...
	I0314 01:18:25.065505   71999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 01:18:25.065515   71999 out.go:304] Setting ErrFile to fd 2...
	I0314 01:18:25.065520   71999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 01:18:25.065710   71999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 01:18:25.066243   71999 out.go:298] Setting JSON to false
	I0314 01:18:25.067154   71999 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7248,"bootTime":1710371857,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 01:18:25.067221   71999 start.go:139] virtualization: kvm guest
	I0314 01:18:25.069709   71999 out.go:177] * [newest-cni-970859] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 01:18:25.071461   71999 notify.go:220] Checking for updates...
	I0314 01:18:25.071476   71999 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 01:18:25.072963   71999 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 01:18:25.074288   71999 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 01:18:25.076578   71999 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 01:18:25.077984   71999 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 01:18:25.079313   71999 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 01:18:25.080973   71999 config.go:182] Loaded profile config "newest-cni-970859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 01:18:25.081371   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:25.081439   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:25.096681   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0314 01:18:25.097075   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:25.097568   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:18:25.097584   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:25.097874   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:25.098076   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:25.098321   71999 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 01:18:25.098702   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:25.098744   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:25.113739   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0314 01:18:25.114133   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:25.114622   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:18:25.114648   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:25.115011   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:25.115198   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:25.150933   71999 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 01:18:25.152364   71999 start.go:297] selected driver: kvm2
	I0314 01:18:25.152376   71999 start.go:901] validating driver "kvm2" against &{Name:newest-cni-970859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-970859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.249 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 01:18:25.152499   71999 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 01:18:25.153150   71999 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:18:25.153229   71999 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 01:18:25.168141   71999 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 01:18:25.168514   71999 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0314 01:18:25.168605   71999 cni.go:84] Creating CNI manager for ""
	I0314 01:18:25.168621   71999 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 01:18:25.168677   71999 start.go:340] cluster config:
	{Name:newest-cni-970859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-970859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.249 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 01:18:25.168801   71999 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:18:25.170725   71999 out.go:177] * Starting "newest-cni-970859" primary control-plane node in "newest-cni-970859" cluster
	I0314 01:18:25.172241   71999 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 01:18:25.172301   71999 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0314 01:18:25.172315   71999 cache.go:56] Caching tarball of preloaded images
	I0314 01:18:25.172390   71999 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 01:18:25.172405   71999 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0314 01:18:25.172537   71999 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/config.json ...
	I0314 01:18:25.172745   71999 start.go:360] acquireMachinesLock for newest-cni-970859: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 01:18:25.172793   71999 start.go:364] duration metric: took 27.491µs to acquireMachinesLock for "newest-cni-970859"
	I0314 01:18:25.172811   71999 start.go:96] Skipping create...Using existing machine configuration
	I0314 01:18:25.172821   71999 fix.go:54] fixHost starting: 
	I0314 01:18:25.173100   71999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 01:18:25.173133   71999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 01:18:25.186900   71999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37521
	I0314 01:18:25.187339   71999 main.go:141] libmachine: () Calling .GetVersion
	I0314 01:18:25.187819   71999 main.go:141] libmachine: Using API Version  1
	I0314 01:18:25.187845   71999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 01:18:25.188202   71999 main.go:141] libmachine: () Calling .GetMachineName
	I0314 01:18:25.188397   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:25.188555   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetState
	I0314 01:18:25.190121   71999 fix.go:112] recreateIfNeeded on newest-cni-970859: state=Stopped err=<nil>
	I0314 01:18:25.190150   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	W0314 01:18:25.190302   71999 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 01:18:25.192134   71999 out.go:177] * Restarting existing kvm2 VM for "newest-cni-970859" ...
	I0314 01:18:25.193510   71999 main.go:141] libmachine: (newest-cni-970859) Calling .Start
	I0314 01:18:25.193669   71999 main.go:141] libmachine: (newest-cni-970859) Ensuring networks are active...
	I0314 01:18:25.194428   71999 main.go:141] libmachine: (newest-cni-970859) Ensuring network default is active
	I0314 01:18:25.194809   71999 main.go:141] libmachine: (newest-cni-970859) Ensuring network mk-newest-cni-970859 is active
	I0314 01:18:25.195263   71999 main.go:141] libmachine: (newest-cni-970859) Getting domain xml...
	I0314 01:18:25.195985   71999 main.go:141] libmachine: (newest-cni-970859) Creating domain...
	I0314 01:18:26.418558   71999 main.go:141] libmachine: (newest-cni-970859) Waiting to get IP...
	I0314 01:18:26.419739   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:26.420270   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:26.420355   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:26.420229   72034 retry.go:31] will retry after 304.875728ms: waiting for machine to come up
	I0314 01:18:26.726959   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:26.727553   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:26.727580   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:26.727490   72034 retry.go:31] will retry after 384.820012ms: waiting for machine to come up
	I0314 01:18:27.114235   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:27.114701   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:27.114729   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:27.114656   72034 retry.go:31] will retry after 331.434823ms: waiting for machine to come up
	I0314 01:18:27.448203   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:27.448756   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:27.448786   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:27.448697   72034 retry.go:31] will retry after 564.139954ms: waiting for machine to come up
	I0314 01:18:28.014521   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:28.015001   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:28.015035   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:28.014981   72034 retry.go:31] will retry after 510.516518ms: waiting for machine to come up
	I0314 01:18:28.526652   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:28.527127   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:28.527158   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:28.527075   72034 retry.go:31] will retry after 777.320743ms: waiting for machine to come up
	I0314 01:18:29.306005   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:29.306439   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:29.306463   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:29.306392   72034 retry.go:31] will retry after 944.794907ms: waiting for machine to come up
	I0314 01:18:30.252501   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:30.253080   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:30.253110   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:30.253013   72034 retry.go:31] will retry after 1.254518848s: waiting for machine to come up
	I0314 01:18:31.509453   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:31.509952   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:31.509982   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:31.509892   72034 retry.go:31] will retry after 1.557179543s: waiting for machine to come up
	I0314 01:18:33.068147   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:33.068639   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:33.068663   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:33.068606   72034 retry.go:31] will retry after 2.280451267s: waiting for machine to come up
	I0314 01:18:35.351149   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:35.351617   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:35.351645   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:35.351551   72034 retry.go:31] will retry after 2.74915389s: waiting for machine to come up
	I0314 01:18:38.103880   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:38.104372   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:38.104392   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:38.104329   72034 retry.go:31] will retry after 2.335472812s: waiting for machine to come up
	I0314 01:18:40.441227   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:40.441593   71999 main.go:141] libmachine: (newest-cni-970859) DBG | unable to find current IP address of domain newest-cni-970859 in network mk-newest-cni-970859
	I0314 01:18:40.441632   71999 main.go:141] libmachine: (newest-cni-970859) DBG | I0314 01:18:40.441551   72034 retry.go:31] will retry after 3.28153208s: waiting for machine to come up
	I0314 01:18:43.724560   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.725062   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has current primary IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
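
The run of "will retry after ..." lines above is the retry.go wait loop backing off with growing, jittered intervals while the libvirt network's DHCP leases are polled for the guest's address. A minimal standalone sketch of that pattern, assuming a hypothetical lookupIP helper in place of the real lease query (this is not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases; it
// fails until the guest has been handed an address. Hypothetical helper.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	wait := 300 * time.Millisecond
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Add jitter and grow the interval, mirroring the 304ms, 384ms,
		// ... 3.28s waits in the log lines above.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)/2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2
	}
	fmt.Println("timed out waiting for an IP")
}

The exact interval progression depends on the jitter, which is why each run of the log shows a slightly different sequence of waits.
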
	I0314 01:18:43.725098   71999 main.go:141] libmachine: (newest-cni-970859) Found IP for machine: 192.168.72.249
	I0314 01:18:43.725109   71999 main.go:141] libmachine: (newest-cni-970859) Reserving static IP address...
	I0314 01:18:43.725569   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "newest-cni-970859", mac: "52:54:00:75:c3:8f", ip: "192.168.72.249"} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:43.725607   71999 main.go:141] libmachine: (newest-cni-970859) Reserved static IP address: 192.168.72.249
	I0314 01:18:43.725632   71999 main.go:141] libmachine: (newest-cni-970859) DBG | skip adding static IP to network mk-newest-cni-970859 - found existing host DHCP lease matching {name: "newest-cni-970859", mac: "52:54:00:75:c3:8f", ip: "192.168.72.249"}
	I0314 01:18:43.725652   71999 main.go:141] libmachine: (newest-cni-970859) DBG | Getting to WaitForSSH function...
	I0314 01:18:43.725663   71999 main.go:141] libmachine: (newest-cni-970859) Waiting for SSH to be available...
	I0314 01:18:43.728108   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.728496   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:43.728524   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.728661   71999 main.go:141] libmachine: (newest-cni-970859) DBG | Using SSH client type: external
	I0314 01:18:43.728688   71999 main.go:141] libmachine: (newest-cni-970859) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa (-rw-------)
	I0314 01:18:43.728729   71999 main.go:141] libmachine: (newest-cni-970859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 01:18:43.728743   71999 main.go:141] libmachine: (newest-cni-970859) DBG | About to run SSH command:
	I0314 01:18:43.728757   71999 main.go:141] libmachine: (newest-cni-970859) DBG | exit 0
	I0314 01:18:43.859120   71999 main.go:141] libmachine: (newest-cni-970859) DBG | SSH cmd err, output: <nil>: 
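
The "Using SSH client type: external" block above shows the probe issued until sshd answers: the system ssh binary, key-only auth, no host-key persistence, and a bare "exit 0" as the remote command. A self-contained sketch of the same kind of probe via os/exec (a few keep-alive options dropped for brevity; the key path and address are the ones from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa",
		"-p", "22",
		"docker@192.168.72.249",
		"exit 0", // liveness probe: succeeds as soon as sshd accepts the key
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("ssh probe: err=%v output=%q\n", err, out)
}
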
	I0314 01:18:43.859423   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetConfigRaw
	I0314 01:18:43.860151   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetIP
	I0314 01:18:43.862690   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.863047   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:43.863075   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.863303   71999 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/newest-cni-970859/config.json ...
	I0314 01:18:43.863481   71999 machine.go:94] provisionDockerMachine start ...
	I0314 01:18:43.863500   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:43.863728   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:43.866072   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.866421   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:43.866447   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.866581   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:43.866774   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:43.866923   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:43.867124   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:43.867324   71999 main.go:141] libmachine: Using SSH client type: native
	I0314 01:18:43.867558   71999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:18:43.867574   71999 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 01:18:43.979541   71999 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 01:18:43.979576   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetMachineName
	I0314 01:18:43.979821   71999 buildroot.go:166] provisioning hostname "newest-cni-970859"
	I0314 01:18:43.979851   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetMachineName
	I0314 01:18:43.980030   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:43.982684   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.983092   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:43.983132   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:43.983263   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:43.983437   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:43.983586   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:43.983754   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:43.983934   71999 main.go:141] libmachine: Using SSH client type: native
	I0314 01:18:43.984110   71999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:18:43.984132   71999 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-970859 && echo "newest-cni-970859" | sudo tee /etc/hostname
	I0314 01:18:44.110336   71999 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-970859
	
	I0314 01:18:44.110367   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:44.113185   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.113554   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:44.113597   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.113812   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:44.114048   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:44.114196   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:44.114360   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:44.114545   71999 main.go:141] libmachine: Using SSH client type: native
	I0314 01:18:44.114730   71999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:18:44.114747   71999 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-970859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-970859/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-970859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 01:18:44.237686   71999 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 01:18:44.237724   71999 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 01:18:44.237788   71999 buildroot.go:174] setting up certificates
	I0314 01:18:44.237845   71999 provision.go:84] configureAuth start
	I0314 01:18:44.237864   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetMachineName
	I0314 01:18:44.238148   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetIP
	I0314 01:18:44.240546   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.240963   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:44.241000   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.241127   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:44.243553   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.243943   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:44.243981   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.244123   71999 provision.go:143] copyHostCerts
	I0314 01:18:44.244193   71999 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 01:18:44.244210   71999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 01:18:44.244317   71999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 01:18:44.244463   71999 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 01:18:44.244476   71999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 01:18:44.244523   71999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 01:18:44.244632   71999 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 01:18:44.244645   71999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 01:18:44.244684   71999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 01:18:44.244785   71999 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.newest-cni-970859 san=[127.0.0.1 192.168.72.249 localhost minikube newest-cni-970859]
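
The "generating server cert" line above lists the SANs baked into the machine's server certificate: loopback, the VM's IP, and its host names. A standalone sketch of issuing such a certificate with Go's crypto/x509 (a throwaway CA is generated in-process here, whereas the real flow signs with the ca.pem/ca-key.pem under .minikube/certs; error handling is elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for the profile's ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-970859"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.249")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-970859"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
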
	I0314 01:18:44.443331   71999 provision.go:177] copyRemoteCerts
	I0314 01:18:44.443385   71999 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 01:18:44.443413   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:44.446221   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.446601   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:44.446631   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.446830   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:44.447004   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:44.447141   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:44.447265   71999 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:18:44.537349   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 01:18:44.562237   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 01:18:44.587253   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 01:18:44.612150   71999 provision.go:87] duration metric: took 374.287634ms to configureAuth
	I0314 01:18:44.612177   71999 buildroot.go:189] setting minikube options for container-runtime
	I0314 01:18:44.612385   71999 config.go:182] Loaded profile config "newest-cni-970859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 01:18:44.612486   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:44.615221   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.615572   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:44.615599   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.615828   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:44.616010   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:44.616164   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:44.616291   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:44.616442   71999 main.go:141] libmachine: Using SSH client type: native
	I0314 01:18:44.616637   71999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:18:44.616661   71999 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 01:18:44.902142   71999 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 01:18:44.902171   71999 machine.go:97] duration metric: took 1.038676999s to provisionDockerMachine
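
The %!s(MISSING) fragments in the command above (and in the later "date +%!s(MISSING).%!N(MISSING)" probe) are almost certainly logging artifacts rather than what ran on the VM: the command string contains a literal %s/%N intended for the remote printf/date, and when that string is passed through Go's fmt a second time with no arguments, the verb is rendered as %!s(MISSING). A two-line demonstration of that behaviour ($OPTS here is just a stand-in for the CRIO_MINIKUBE_OPTIONS content):

package main

import "fmt"

func main() {
	cmd := "sudo mkdir -p /etc/sysconfig && printf %s \"$OPTS\" | sudo tee /etc/sysconfig/crio.minikube"
	// Formatting the command string itself, with no argument for the %s verb,
	// reproduces the artifact seen in the log.
	fmt.Println(fmt.Sprintf(cmd))
	// Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "$OPTS" | sudo tee /etc/sysconfig/crio.minikube
}
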
	I0314 01:18:44.902183   71999 start.go:293] postStartSetup for "newest-cni-970859" (driver="kvm2")
	I0314 01:18:44.902195   71999 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 01:18:44.902216   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:44.902563   71999 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 01:18:44.902584   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:44.905097   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.905519   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:44.905553   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:44.905712   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:44.905930   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:44.906090   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:44.906296   71999 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:18:44.994738   71999 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 01:18:44.999290   71999 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 01:18:44.999315   71999 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 01:18:44.999389   71999 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 01:18:44.999491   71999 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 01:18:44.999604   71999 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 01:18:45.010035   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 01:18:45.035322   71999 start.go:296] duration metric: took 133.125614ms for postStartSetup
	I0314 01:18:45.035360   71999 fix.go:56] duration metric: took 19.862539441s for fixHost
	I0314 01:18:45.035379   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:45.038142   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.038497   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:45.038526   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.038664   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:45.038867   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:45.039025   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:45.039150   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:45.039298   71999 main.go:141] libmachine: Using SSH client type: native
	I0314 01:18:45.039495   71999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.249 22 <nil> <nil>}
	I0314 01:18:45.039511   71999 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 01:18:45.151485   71999 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710379125.122499406
	
	I0314 01:18:45.151511   71999 fix.go:216] guest clock: 1710379125.122499406
	I0314 01:18:45.151520   71999 fix.go:229] Guest: 2024-03-14 01:18:45.122499406 +0000 UTC Remote: 2024-03-14 01:18:45.03536377 +0000 UTC m=+20.019852437 (delta=87.135636ms)
	I0314 01:18:45.151543   71999 fix.go:200] guest clock delta is within tolerance: 87.135636ms
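
The fix.go lines above compare the guest's "date +%s.%N" output against the host's wall clock and skip resyncing when the difference is small. Recomputing the reported ~87ms delta from the two timestamps in the log (the 2-second threshold below is illustrative, not necessarily minikube's actual tolerance):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log lines above.
	guest := time.Unix(0, int64(1710379125.122499406*1e9)).UTC()  // guest `date +%s.%N`
	host := time.Date(2024, 3, 14, 1, 18, 45, 35363770, time.UTC) // host wall clock ("Remote")
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}
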
	I0314 01:18:45.151550   71999 start.go:83] releasing machines lock for "newest-cni-970859", held for 19.978746044s
	I0314 01:18:45.151574   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:45.151883   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetIP
	I0314 01:18:45.154525   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.154940   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:45.154969   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.155100   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:45.155597   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:45.155783   71999 main.go:141] libmachine: (newest-cni-970859) Calling .DriverName
	I0314 01:18:45.155881   71999 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 01:18:45.155926   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:45.155979   71999 ssh_runner.go:195] Run: cat /version.json
	I0314 01:18:45.155999   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHHostname
	I0314 01:18:45.158646   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.158933   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.159028   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:45.159057   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.159180   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:45.159289   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:45.159317   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:45.159341   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:45.159487   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHPort
	I0314 01:18:45.159492   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:45.159663   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHKeyPath
	I0314 01:18:45.159673   71999 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:18:45.159817   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetSSHUsername
	I0314 01:18:45.159912   71999 sshutil.go:53] new ssh client: &{IP:192.168.72.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/newest-cni-970859/id_rsa Username:docker}
	I0314 01:18:45.244655   71999 ssh_runner.go:195] Run: systemctl --version
	I0314 01:18:45.281933   71999 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 01:18:45.426282   71999 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 01:18:45.433143   71999 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 01:18:45.433195   71999 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 01:18:45.450560   71999 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 01:18:45.450585   71999 start.go:494] detecting cgroup driver to use...
	I0314 01:18:45.450637   71999 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 01:18:45.468128   71999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 01:18:45.483378   71999 docker.go:217] disabling cri-docker service (if available) ...
	I0314 01:18:45.483434   71999 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 01:18:45.498259   71999 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 01:18:45.513120   71999 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 01:18:45.638461   71999 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 01:18:45.821284   71999 docker.go:233] disabling docker service ...
	I0314 01:18:45.821360   71999 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 01:18:45.837972   71999 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 01:18:45.853391   71999 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 01:18:45.989964   71999 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 01:18:46.118345   71999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 01:18:46.134351   71999 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 01:18:46.154640   71999 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 01:18:46.154694   71999 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 01:18:46.166202   71999 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 01:18:46.166263   71999 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 01:18:46.177611   71999 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 01:18:46.191043   71999 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 01:18:46.203918   71999 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 01:18:46.216038   71999 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 01:18:46.226398   71999 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 01:18:46.226450   71999 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 01:18:46.243188   71999 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 01:18:46.253945   71999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 01:18:46.374419   71999 ssh_runner.go:195] Run: sudo systemctl restart crio
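
Taken together, the crictl.yaml write and the sed edits above leave CRI-O pointed at registry.k8s.io/pause:3.9 with the cgroupfs cgroup manager and conmon running in the pod cgroup, after which crio is restarted. A sketch that writes an equivalent drop-in directly (the section layout is illustrative; the real flow edits the existing /etc/crio/crio.conf.d/02-crio.conf in place over SSH):

package main

import "os"

const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
`

func main() {
	// Written to a local path for demonstration; the real file lives on the VM.
	if err := os.WriteFile("02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		panic(err)
	}
}
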
	I0314 01:18:46.514012   71999 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 01:18:46.514093   71999 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 01:18:46.519482   71999 start.go:562] Will wait 60s for crictl version
	I0314 01:18:46.519533   71999 ssh_runner.go:195] Run: which crictl
	I0314 01:18:46.523839   71999 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 01:18:46.562327   71999 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 01:18:46.562419   71999 ssh_runner.go:195] Run: crio --version
	I0314 01:18:46.592362   71999 ssh_runner.go:195] Run: crio --version
	I0314 01:18:46.625963   71999 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 01:18:46.627499   71999 main.go:141] libmachine: (newest-cni-970859) Calling .GetIP
	I0314 01:18:46.630405   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:46.630805   71999 main.go:141] libmachine: (newest-cni-970859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:c3:8f", ip: ""} in network mk-newest-cni-970859: {Iface:virbr3 ExpiryTime:2024-03-14 02:18:36 +0000 UTC Type:0 Mac:52:54:00:75:c3:8f Iaid: IPaddr:192.168.72.249 Prefix:24 Hostname:newest-cni-970859 Clientid:01:52:54:00:75:c3:8f}
	I0314 01:18:46.630834   71999 main.go:141] libmachine: (newest-cni-970859) DBG | domain newest-cni-970859 has defined IP address 192.168.72.249 and MAC address 52:54:00:75:c3:8f in network mk-newest-cni-970859
	I0314 01:18:46.631085   71999 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 01:18:46.636444   71999 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 01:18:46.652712   71999 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0314 01:18:46.654082   71999 kubeadm.go:877] updating cluster {Name:newest-cni-970859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:newest-cni-970859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.249 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHo
stTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 01:18:46.654210   71999 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 01:18:46.654283   71999 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 01:18:46.693475   71999 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 01:18:46.693559   71999 ssh_runner.go:195] Run: which lz4
	I0314 01:18:46.697846   71999 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 01:18:46.702309   71999 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 01:18:46.702339   71999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0314 01:18:48.271029   71999 crio.go:444] duration metric: took 1.57321697s to copy over tarball
	I0314 01:18:48.271105   71999 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	
	
	==> CRI-O <==
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.350043845Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32b29568-b5d1-4cfb-9ce7-b41496a6c40c name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.350254977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377972347065014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c043e68cf38f6698396df019dd1accee542f53b7b3a728c7cd4c0fbe78740ac,PodSandboxId:2f8ccb5fcf859e04a4fab59c7e34e972573de99559938773051a0e227bb2ab29,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377951855199729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b24e199-4e82-4c69-bb1f-11fb49d244fe,},Annotations:map[string]string{io.kubernetes.container.hash: 418dae4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127,PodSandboxId:f9b93e1152c040ff8ecf86534669abd8a8688d4c49d359cd25b5498d27cd3a1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377949208103194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r2dml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18370dd-193e-45c2-ab72-36f8155ac015,},Annotations:map[string]string{io.kubernetes.container.hash: d4479a7a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd,PodSandboxId:2a70688c9af908e275f30529130ed766736e70b5733e7067cf8bfcb04c67b254,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377941611129765,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjz6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b76a6d-0a4a-4e06-8
e0a-7ac69d91a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: ee399f47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377941618840084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb05
00a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684,PodSandboxId:074cbe23e5592f7dbbd572e3b6eec9f75ac3801ecad53223b4c4bc8ade8c0fcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377936886170815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7243ee770cce457c6955feda92fc46a2,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a,PodSandboxId:3f49785534e5e082813189762c8ad6da5222cfa3e4a6000a960a7f198a5136ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377936887562191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721baa760f2eade26efc571ba635dfcb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d67d2646,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642,PodSandboxId:190cd2c13792cb5079fd39c2b695a8d53a11925d08e0f4eeb751179abe64caa8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377936808196049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8581d50187b10e539e7104520acb6dee,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d,PodSandboxId:a3f1204219842148966555397eb898aa648b5a21e2e336221e2549fcac78cc79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377936795983442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eff47c507cfd66cf030c245f9d1227f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: e9fc4344,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32b29568-b5d1-4cfb-9ce7-b41496a6c40c name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.401753605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7462651c-4e9d-4378-bc40-3dbb7f431fa1 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.401879038Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7462651c-4e9d-4378-bc40-3dbb7f431fa1 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.403956047Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ea933e3-d9dc-4725-9d54-b750d55f3f02 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.404356392Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379131404331851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ea933e3-d9dc-4725-9d54-b750d55f3f02 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.405433002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=acf4ee03-535b-4801-b483-875705bf4375 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.405568221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=acf4ee03-535b-4801-b483-875705bf4375 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.405925940Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377972347065014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c043e68cf38f6698396df019dd1accee542f53b7b3a728c7cd4c0fbe78740ac,PodSandboxId:2f8ccb5fcf859e04a4fab59c7e34e972573de99559938773051a0e227bb2ab29,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377951855199729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b24e199-4e82-4c69-bb1f-11fb49d244fe,},Annotations:map[string]string{io.kubernetes.container.hash: 418dae4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127,PodSandboxId:f9b93e1152c040ff8ecf86534669abd8a8688d4c49d359cd25b5498d27cd3a1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377949208103194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r2dml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18370dd-193e-45c2-ab72-36f8155ac015,},Annotations:map[string]string{io.kubernetes.container.hash: d4479a7a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd,PodSandboxId:2a70688c9af908e275f30529130ed766736e70b5733e7067cf8bfcb04c67b254,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377941611129765,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjz6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b76a6d-0a4a-4e06-8
e0a-7ac69d91a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: ee399f47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377941618840084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb05
00a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684,PodSandboxId:074cbe23e5592f7dbbd572e3b6eec9f75ac3801ecad53223b4c4bc8ade8c0fcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377936886170815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7243ee770cce457c6955feda92fc46a2,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a,PodSandboxId:3f49785534e5e082813189762c8ad6da5222cfa3e4a6000a960a7f198a5136ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377936887562191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721baa760f2eade26efc571ba635dfcb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d67d2646,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642,PodSandboxId:190cd2c13792cb5079fd39c2b695a8d53a11925d08e0f4eeb751179abe64caa8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377936808196049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8581d50187b10e539e7104520acb6dee,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d,PodSandboxId:a3f1204219842148966555397eb898aa648b5a21e2e336221e2549fcac78cc79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377936795983442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eff47c507cfd66cf030c245f9d1227f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: e9fc4344,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=acf4ee03-535b-4801-b483-875705bf4375 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.453206404Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3e53fe9-7aac-4538-8ca3-017441b91125 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.453309387Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3e53fe9-7aac-4538-8ca3-017441b91125 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.454814421Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f180d3d1-b8a5-487d-8578-8ae5612d495d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.455296437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379131455268699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f180d3d1-b8a5-487d-8578-8ae5612d495d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.456112400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d49d7c09-8089-4102-8d49-01ed5d7d5f64 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.456166485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d49d7c09-8089-4102-8d49-01ed5d7d5f64 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.456374217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377972347065014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c043e68cf38f6698396df019dd1accee542f53b7b3a728c7cd4c0fbe78740ac,PodSandboxId:2f8ccb5fcf859e04a4fab59c7e34e972573de99559938773051a0e227bb2ab29,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377951855199729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b24e199-4e82-4c69-bb1f-11fb49d244fe,},Annotations:map[string]string{io.kubernetes.container.hash: 418dae4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127,PodSandboxId:f9b93e1152c040ff8ecf86534669abd8a8688d4c49d359cd25b5498d27cd3a1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377949208103194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r2dml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18370dd-193e-45c2-ab72-36f8155ac015,},Annotations:map[string]string{io.kubernetes.container.hash: d4479a7a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd,PodSandboxId:2a70688c9af908e275f30529130ed766736e70b5733e7067cf8bfcb04c67b254,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377941611129765,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjz6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b76a6d-0a4a-4e06-8
e0a-7ac69d91a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: ee399f47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377941618840084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb05
00a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684,PodSandboxId:074cbe23e5592f7dbbd572e3b6eec9f75ac3801ecad53223b4c4bc8ade8c0fcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377936886170815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7243ee770cce457c6955feda92fc46a2,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a,PodSandboxId:3f49785534e5e082813189762c8ad6da5222cfa3e4a6000a960a7f198a5136ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377936887562191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721baa760f2eade26efc571ba635dfcb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d67d2646,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642,PodSandboxId:190cd2c13792cb5079fd39c2b695a8d53a11925d08e0f4eeb751179abe64caa8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377936808196049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8581d50187b10e539e7104520acb6dee,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d,PodSandboxId:a3f1204219842148966555397eb898aa648b5a21e2e336221e2549fcac78cc79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377936795983442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eff47c507cfd66cf030c245f9d1227f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: e9fc4344,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d49d7c09-8089-4102-8d49-01ed5d7d5f64 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.497166201Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ecff63c-65f4-4bc8-84a6-69af2d529046 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.497237914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ecff63c-65f4-4bc8-84a6-69af2d529046 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.498480230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9c2c65e-ec8e-4125-999a-2bf742a80331 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.499116966Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379131499084961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9c2c65e-ec8e-4125-999a-2bf742a80331 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.499965181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee92ebae-588d-4fa0-8634-023744f5f5f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.500043299Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee92ebae-588d-4fa0-8634-023744f5f5f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.500227968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710377972347065014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c043e68cf38f6698396df019dd1accee542f53b7b3a728c7cd4c0fbe78740ac,PodSandboxId:2f8ccb5fcf859e04a4fab59c7e34e972573de99559938773051a0e227bb2ab29,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710377951855199729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b24e199-4e82-4c69-bb1f-11fb49d244fe,},Annotations:map[string]string{io.kubernetes.container.hash: 418dae4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127,PodSandboxId:f9b93e1152c040ff8ecf86534669abd8a8688d4c49d359cd25b5498d27cd3a1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710377949208103194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r2dml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18370dd-193e-45c2-ab72-36f8155ac015,},Annotations:map[string]string{io.kubernetes.container.hash: d4479a7a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd,PodSandboxId:2a70688c9af908e275f30529130ed766736e70b5733e7067cf8bfcb04c67b254,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710377941611129765,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjz6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b76a6d-0a4a-4e06-8
e0a-7ac69d91a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: ee399f47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca,PodSandboxId:da36646c444c7528647c3db03a0118a0d22723eb55258882a3c9032cd3f1de4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710377941618840084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad3f5f56-5c62-4dc1-a4d3-4c04efb05
00a,},Annotations:map[string]string{io.kubernetes.container.hash: c6195aec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684,PodSandboxId:074cbe23e5592f7dbbd572e3b6eec9f75ac3801ecad53223b4c4bc8ade8c0fcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710377936886170815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7243ee770cce457c6955feda92fc46a2,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a,PodSandboxId:3f49785534e5e082813189762c8ad6da5222cfa3e4a6000a960a7f198a5136ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710377936887562191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721baa760f2eade26efc571ba635dfcb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d67d2646,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642,PodSandboxId:190cd2c13792cb5079fd39c2b695a8d53a11925d08e0f4eeb751179abe64caa8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710377936808196049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8581d50187b10e539e7104520acb6dee,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d,PodSandboxId:a3f1204219842148966555397eb898aa648b5a21e2e336221e2549fcac78cc79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710377936795983442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-164135,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eff47c507cfd66cf030c245f9d1227f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: e9fc4344,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee92ebae-588d-4fa0-8634-023744f5f5f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.547034645Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=c4922198-01ab-499b-9fca-69b29898e3a1 name=/runtime.v1.RuntimeService/Status
	Mar 14 01:18:51 embed-certs-164135 crio[703]: time="2024-03-14 01:18:51.547132271Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=c4922198-01ab-499b-9fca-69b29898e3a1 name=/runtime.v1.RuntimeService/Status
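	The CRI-O debug entries above are the runtime answering periodic Version, ImageFsInfo, and ListContainers RPCs. A sketch of how to issue the same three calls with crictl, assuming the socket path shown earlier in the log; -a is only there so the exited storage-provisioner attempt is included:
	    # Same RPCs as in the debug log, driven from the CLI (socket path from the log above).
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers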
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d987b830b81fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   da36646c444c7       storage-provisioner
	9c043e68cf38f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   2f8ccb5fcf859       busybox
	a69c7aed18e08       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      19 minutes ago      Running             coredns                   1                   f9b93e1152c04       coredns-5dd5756b68-r2dml
	2e736f3d1ff7d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   da36646c444c7       storage-provisioner
	1a163fee30923       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      19 minutes ago      Running             kube-proxy                1                   2a70688c9af90       kube-proxy-wjz6d
	bacb8fc976a14       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      19 minutes ago      Running             kube-apiserver            1                   3f49785534e5e       kube-apiserver-embed-certs-164135
	066a9f5381b01       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      19 minutes ago      Running             kube-scheduler            1                   074cbe23e5592       kube-scheduler-embed-certs-164135
	dbb700c9f2e3b       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      19 minutes ago      Running             kube-controller-manager   1                   190cd2c13792c       kube-controller-manager-embed-certs-164135
	24395f2c73e37       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      19 minutes ago      Running             etcd                      1                   a3f1204219842       etcd-embed-certs-164135
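	Every container in the table is on attempt 1 after the node restart, except storage-provisioner, whose first post-restart attempt exited and was restarted as attempt 2. A short sketch for drilling into one of them; crictl accepts the truncated IDs printed above:
	    # Inspect and tail the restarted storage-provisioner container from the table above.
	    sudo crictl inspect d987b830b81fb | head -n 40
	    sudo crictl logs --tail 50 d987b830b81fb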
	
	
	==> coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39374 - 46825 "HINFO IN 1958781166621160017.2921693539955365987. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009945916s
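	The CoreDNS log only shows the startup banner and its own HINFO self-check query. A hedged way to confirm in-cluster resolution through this instance; the context and namespace come from the report, while the probe image and the k8s-app=kube-dns label are assumptions about the standard CoreDNS deployment:
	    kubectl --context embed-certs-164135 -n kube-system logs -l k8s-app=kube-dns --tail=20
	    # One-off DNS probe pod; the image is an assumption, any image that ships nslookup works.
	    kubectl --context embed-certs-164135 run dns-probe --rm -it --restart=Never \
	      --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local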
	
	
	==> describe nodes <==
	Name:               embed-certs-164135
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-164135
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=embed-certs-164135
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T00_49_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:49:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-164135
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 01:18:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 01:14:48 +0000   Thu, 14 Mar 2024 00:49:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 01:14:48 +0000   Thu, 14 Mar 2024 00:49:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 01:14:48 +0000   Thu, 14 Mar 2024 00:49:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 01:14:48 +0000   Thu, 14 Mar 2024 00:59:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.72
	  Hostname:    embed-certs-164135
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 30080eb06c724ee7b913b8bec5f80c3f
	  System UUID:                30080eb0-6c72-4ee7-b913-b8bec5f80c3f
	  Boot ID:                    81ef2eec-6092-4c2b-bffc-91c2a5c86ba1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-r2dml                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-164135                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-164135             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-164135    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-wjz6d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-164135             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-bbz2d               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-164135 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-164135 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-164135 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node embed-certs-164135 status is now: NodeReady
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-164135 event: Registered Node embed-certs-164135 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node embed-certs-164135 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node embed-certs-164135 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node embed-certs-164135 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-164135 event: Registered Node embed-certs-164135 in Controller
	
	
	==> dmesg <==
	[Mar14 00:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054695] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045619] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.920974] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.722886] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.671564] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000063] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.340656] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.065050] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067327] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.208955] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.138912] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.267254] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +5.086459] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +0.068193] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.976211] systemd-fstab-generator[916]: Ignoring "noauto" option for root device
	[Mar14 00:59] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.003403] systemd-fstab-generator[1529]: Ignoring "noauto" option for root device
	[  +3.694951] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.943130] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] <==
	{"level":"info","ts":"2024-03-14T00:58:58.936973Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T00:58:58.937005Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T01:08:58.964712Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":809}
	{"level":"info","ts":"2024-03-14T01:08:58.96794Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":809,"took":"2.878508ms","hash":1106108815}
	{"level":"info","ts":"2024-03-14T01:08:58.968034Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1106108815,"revision":809,"compact-revision":-1}
	{"level":"info","ts":"2024-03-14T01:13:58.97532Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1052}
	{"level":"info","ts":"2024-03-14T01:13:58.977238Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1052,"took":"1.348668ms","hash":2444296054}
	{"level":"info","ts":"2024-03-14T01:13:58.977311Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2444296054,"revision":1052,"compact-revision":809}
	{"level":"warn","ts":"2024-03-14T01:17:31.940199Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.116157ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3441469447039681083 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.72\" mod_revision:1460 > success:<request_put:<key:\"/registry/masterleases/192.168.50.72\" value_size:66 lease:3441469447039681081 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.72\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T01:17:31.940479Z","caller":"traceutil/trace.go:171","msg":"trace[2116856524] linearizableReadLoop","detail":"{readStateIndex:1728; appliedIndex:1727; }","duration":"172.941136ms","start":"2024-03-14T01:17:31.767508Z","end":"2024-03-14T01:17:31.94045Z","steps":["trace[2116856524] 'read index received'  (duration: 43.247758ms)","trace[2116856524] 'applied index is now lower than readState.Index'  (duration: 129.691949ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T01:17:31.940804Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.328578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T01:17:31.940862Z","caller":"traceutil/trace.go:171","msg":"trace[1072718275] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1468; }","duration":"173.396438ms","start":"2024-03-14T01:17:31.767457Z","end":"2024-03-14T01:17:31.940853Z","steps":["trace[1072718275] 'agreement among raft nodes before linearized reading'  (duration: 173.057018ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T01:17:31.940819Z","caller":"traceutil/trace.go:171","msg":"trace[942122669] transaction","detail":"{read_only:false; response_revision:1468; number_of_response:1; }","duration":"193.712591ms","start":"2024-03-14T01:17:31.747085Z","end":"2024-03-14T01:17:31.940797Z","steps":["trace[942122669] 'process raft request'  (duration: 63.713061ms)","trace[942122669] 'compare'  (duration: 127.763088ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T01:17:47.947101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.874979ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T01:17:47.947193Z","caller":"traceutil/trace.go:171","msg":"trace[525703831] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1481; }","duration":"112.999771ms","start":"2024-03-14T01:17:47.83418Z","end":"2024-03-14T01:17:47.947179Z","steps":["trace[525703831] 'range keys from in-memory index tree'  (duration: 112.842362ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T01:18:52.051806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.241402ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3441469447039681461 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.72\" mod_revision:1523 > success:<request_put:<key:\"/registry/masterleases/192.168.50.72\" value_size:66 lease:3441469447039681459 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.72\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T01:18:52.052021Z","caller":"traceutil/trace.go:171","msg":"trace[1540834615] linearizableReadLoop","detail":"{readStateIndex:1807; appliedIndex:1806; }","duration":"176.926496ms","start":"2024-03-14T01:18:51.875073Z","end":"2024-03-14T01:18:52.051999Z","steps":["trace[1540834615] 'read index received'  (duration: 197.856µs)","trace[1540834615] 'applied index is now lower than readState.Index'  (duration: 176.727049ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T01:18:52.052106Z","caller":"traceutil/trace.go:171","msg":"trace[886108510] transaction","detail":"{read_only:false; response_revision:1531; number_of_response:1; }","duration":"342.243413ms","start":"2024-03-14T01:18:51.709847Z","end":"2024-03-14T01:18:52.05209Z","steps":["trace[886108510] 'process raft request'  (duration: 33.493745ms)","trace[886108510] 'compare'  (duration: 308.061246ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T01:18:52.0522Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T01:18:51.709827Z","time spent":"342.32994ms","remote":"127.0.0.1:59724","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.72\" mod_revision:1523 > success:<request_put:<key:\"/registry/masterleases/192.168.50.72\" value_size:66 lease:3441469447039681459 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.72\" > >"}
	{"level":"warn","ts":"2024-03-14T01:18:52.052387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.791672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-03-14T01:18:52.052524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.462135ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-14T01:18:52.052588Z","caller":"traceutil/trace.go:171","msg":"trace[345111553] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:1531; }","duration":"177.52874ms","start":"2024-03-14T01:18:51.875049Z","end":"2024-03-14T01:18:52.052578Z","steps":["trace[345111553] 'agreement among raft nodes before linearized reading'  (duration: 177.440687ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T01:18:52.05242Z","caller":"traceutil/trace.go:171","msg":"trace[22812136] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1531; }","duration":"175.82981ms","start":"2024-03-14T01:18:51.876582Z","end":"2024-03-14T01:18:52.052412Z","steps":["trace[22812136] 'agreement among raft nodes before linearized reading'  (duration: 175.73487ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T01:18:52.270803Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.513459ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3441469447039681468 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1530 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T01:18:52.270901Z","caller":"traceutil/trace.go:171","msg":"trace[1618395179] transaction","detail":"{read_only:false; response_revision:1532; number_of_response:1; }","duration":"211.92931ms","start":"2024-03-14T01:18:52.058961Z","end":"2024-03-14T01:18:52.27089Z","steps":["trace[1618395179] 'process raft request'  (duration: 98.250533ms)","trace[1618395179] 'compare'  (duration: 113.285002ms)"],"step_count":2}
	
	
	==> kernel <==
	 01:18:53 up 20 min,  0 users,  load average: 0.09, 0.12, 0.09
	Linux embed-certs-164135 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] <==
	W0314 01:14:01.313555       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:14:01.313795       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:14:01.313945       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:14:01.313602       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:14:01.314154       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:14:01.315906       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 01:15:00.269839       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 01:15:01.314306       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:15:01.314365       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:15:01.314374       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:15:01.316545       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:15:01.316694       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:15:01.316703       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 01:16:00.269749       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 01:17:00.269964       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 01:17:01.314909       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:17:01.314969       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 01:17:01.314977       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 01:17:01.317290       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 01:17:01.317499       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:17:01.317557       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 01:18:00.269561       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] <==
	I0314 01:13:13.766403       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:13:43.254409       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:13:43.774227       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:14:13.260084       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:14:13.785515       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:14:43.268253       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:14:43.793885       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 01:15:08.186414       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="372.932µs"
	E0314 01:15:13.273886       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:15:13.802146       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 01:15:21.182300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="130.492µs"
	E0314 01:15:43.280751       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:15:43.810466       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:16:13.287347       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:16:13.818281       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:16:43.293938       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:16:43.827861       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:17:13.300490       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:17:13.844190       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:17:43.307492       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:17:43.860837       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:18:13.314891       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:18:13.873089       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 01:18:43.320586       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 01:18:43.882592       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] <==
	I0314 00:59:01.847041       1 server_others.go:69] "Using iptables proxy"
	I0314 00:59:01.858935       1 node.go:141] Successfully retrieved node IP: 192.168.50.72
	I0314 00:59:01.941231       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 00:59:01.943585       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 00:59:01.950873       1 server_others.go:152] "Using iptables Proxier"
	I0314 00:59:01.950930       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 00:59:01.951118       1 server.go:846] "Version info" version="v1.28.4"
	I0314 00:59:01.951148       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:59:01.952033       1 config.go:188] "Starting service config controller"
	I0314 00:59:01.952073       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 00:59:01.952097       1 config.go:97] "Starting endpoint slice config controller"
	I0314 00:59:01.952104       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 00:59:01.955978       1 config.go:315] "Starting node config controller"
	I0314 00:59:01.956008       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 00:59:02.052232       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 00:59:02.052289       1 shared_informer.go:318] Caches are synced for service config
	I0314 00:59:02.056737       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] <==
	I0314 00:58:57.948121       1 serving.go:348] Generated self-signed cert in-memory
	W0314 00:59:00.363296       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 00:59:00.363430       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 00:59:00.363581       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 00:59:00.363613       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 00:59:00.397704       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 00:59:00.397852       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:59:00.401425       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 00:59:00.401505       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 00:59:00.402565       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 00:59:00.405726       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 00:59:00.501775       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 01:15:56 embed-certs-164135 kubelet[923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:15:56 embed-certs-164135 kubelet[923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:16:08 embed-certs-164135 kubelet[923]: E0314 01:16:08.166358     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:16:22 embed-certs-164135 kubelet[923]: E0314 01:16:22.166336     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:16:37 embed-certs-164135 kubelet[923]: E0314 01:16:37.166698     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:16:50 embed-certs-164135 kubelet[923]: E0314 01:16:50.167233     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:16:56 embed-certs-164135 kubelet[923]: E0314 01:16:56.189200     923 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 01:16:56 embed-certs-164135 kubelet[923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 01:16:56 embed-certs-164135 kubelet[923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 01:16:56 embed-certs-164135 kubelet[923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:16:56 embed-certs-164135 kubelet[923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:17:02 embed-certs-164135 kubelet[923]: E0314 01:17:02.166414     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:17:17 embed-certs-164135 kubelet[923]: E0314 01:17:17.166754     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:17:28 embed-certs-164135 kubelet[923]: E0314 01:17:28.169024     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:17:39 embed-certs-164135 kubelet[923]: E0314 01:17:39.165780     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:17:52 embed-certs-164135 kubelet[923]: E0314 01:17:52.166820     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:17:56 embed-certs-164135 kubelet[923]: E0314 01:17:56.192424     923 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 01:17:56 embed-certs-164135 kubelet[923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 01:17:56 embed-certs-164135 kubelet[923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 01:17:56 embed-certs-164135 kubelet[923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 01:17:56 embed-certs-164135 kubelet[923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 01:18:06 embed-certs-164135 kubelet[923]: E0314 01:18:06.166884     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:18:18 embed-certs-164135 kubelet[923]: E0314 01:18:18.166329     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:18:33 embed-certs-164135 kubelet[923]: E0314 01:18:33.166273     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	Mar 14 01:18:44 embed-certs-164135 kubelet[923]: E0314 01:18:44.168247     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bbz2d" podUID="e6df7295-58bb-4ece-841f-f93afd3f9dc9"
	
	
	==> storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] <==
	I0314 00:59:01.730500       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0314 00:59:31.736208       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] <==
	I0314 00:59:32.452058       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 00:59:32.467220       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 00:59:32.467394       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 00:59:49.871581       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24948ad4-4184-4bcb-a96f-bdf0dcc6da5a", APIVersion:"v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-164135_5f6332af-0ee0-4bc2-8732-4a59fe51ace0 became leader
	I0314 00:59:49.874256       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 00:59:49.874511       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-164135_5f6332af-0ee0-4bc2-8732-4a59fe51ace0!
	I0314 00:59:49.976404       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-164135_5f6332af-0ee0-4bc2-8732-4a59fe51ace0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-164135 -n embed-certs-164135
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-164135 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-bbz2d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-164135 describe pod metrics-server-57f55c9bc5-bbz2d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-164135 describe pod metrics-server-57f55c9bc5-bbz2d: exit status 1 (75.46649ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-bbz2d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-164135 describe pod metrics-server-57f55c9bc5-bbz2d: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (382.83s)
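For local triage of a failure like this one, a minimal sketch of the equivalent manual checks (assuming a local minikube profile named embed-certs-164135, that the metrics-server addon is deployed as a kube-system Deployment named metrics-server whose pods carry the k8s-app=metrics-server label, and that out/minikube-linux-amd64 is the locally built binary; these commands are illustrative and are not part of the captured test output):

    # confirm the addon is still reported as enabled after the stop/start cycle
    out/minikube-linux-amd64 -p embed-certs-164135 addons list
    # check whether the addon's pods exist and why they are not Running
    kubectl --context embed-certs-164135 -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context embed-certs-164135 -n kube-system describe deployment metrics-server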

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (95.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:16:11.745212   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
E0314 01:16:43.240274   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/custom-flannel-326260/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.11:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-004791 -n old-k8s-version-004791
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-004791 -n old-k8s-version-004791: exit status 2 (251.282699ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-004791" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-004791 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-004791 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.64µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-004791 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791: exit status 2 (258.825651ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-004791 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-004791 logs -n 25: (1.630990583s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-326260 sudo cat                              | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo                                  | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo find                             | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-326260 sudo crio                             | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-326260                                       | bridge-326260                | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-573365 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:49 UTC |
	|         | disable-driver-mounts-573365                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:49 UTC | 14 Mar 24 00:51 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-164135            | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-585806             | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC | 14 Mar 24 00:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-652215  | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC | 14 Mar 24 00:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:51 UTC |                     |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-004791        | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-164135                 | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-164135                                  | embed-certs-164135           | jenkins | v1.32.0 | 14 Mar 24 00:52 UTC | 14 Mar 24 01:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-585806                  | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-585806                                   | no-preload-585806            | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-652215       | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-652215 | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 01:02 UTC |
	|         | default-k8s-diff-port-652215                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:53 UTC | 14 Mar 24 00:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-004791             | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC | 14 Mar 24 00:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-004791                              | old-k8s-version-004791       | jenkins | v1.32.0 | 14 Mar 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:54:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:54:03.108880   66232 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:54:03.109016   66232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:54:03.109028   66232 out.go:304] Setting ErrFile to fd 2...
	I0314 00:54:03.109034   66232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:54:03.109233   66232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:54:03.109796   66232 out.go:298] Setting JSON to false
	I0314 00:54:03.110638   66232 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5786,"bootTime":1710371857,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:54:03.110699   66232 start.go:139] virtualization: kvm guest
	I0314 00:54:03.113106   66232 out.go:177] * [old-k8s-version-004791] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:54:03.114565   66232 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:54:03.115894   66232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:54:03.114598   66232 notify.go:220] Checking for updates...
	I0314 00:54:03.119029   66232 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:54:03.120493   66232 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:54:03.121915   66232 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:54:03.123383   66232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:54:03.125258   66232 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:54:03.125814   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:54:03.125873   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:54:03.140521   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0314 00:54:03.140889   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:54:03.141339   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:54:03.141362   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:54:03.141702   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:54:03.141898   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:54:03.143989   66232 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0314 00:54:03.145403   66232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:54:03.145671   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:54:03.145711   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:54:03.159852   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0314 00:54:03.160244   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:54:03.160722   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:54:03.160742   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:54:03.161088   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:54:03.161279   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:54:03.197047   66232 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 00:54:03.198624   66232 start.go:297] selected driver: kvm2
	I0314 00:54:03.198642   66232 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:54:03.198784   66232 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:54:03.199455   66232 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:54:03.199536   66232 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 00:54:03.214619   66232 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 00:54:03.214983   66232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 00:54:03.215045   66232 cni.go:84] Creating CNI manager for ""
	I0314 00:54:03.215065   66232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:54:03.215109   66232 start.go:340] cluster config:
	{Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:54:03.215204   66232 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:54:03.217175   66232 out.go:177] * Starting "old-k8s-version-004791" primary control-plane node in "old-k8s-version-004791" cluster
	I0314 00:54:03.607045   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:03.218613   66232 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:54:03.218655   66232 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0314 00:54:03.218680   66232 cache.go:56] Caching tarball of preloaded images
	I0314 00:54:03.218748   66232 preload.go:173] Found /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 00:54:03.218758   66232 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0314 00:54:03.218868   66232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:54:03.219079   66232 start.go:360] acquireMachinesLock for old-k8s-version-004791: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:54:06.679066   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:12.759084   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:15.831164   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:21.911055   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:24.983011   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:31.063042   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:34.135127   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:40.215026   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:43.287108   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:49.367033   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:52.439207   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:54:58.519055   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:01.591066   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:07.671067   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:10.743137   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:16.823021   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:19.895094   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:25.975060   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:29.047059   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:35.127005   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:38.199075   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:44.279056   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:47.351112   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:53.431074   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:55:56.503093   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:02.583065   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:05.655062   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:11.735056   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:14.807089   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:20.887027   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:23.959111   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:30.039063   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:33.111114   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:39.191071   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:42.263146   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:48.343110   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:51.415094   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:56:57.495078   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:00.567113   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:06.647070   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:09.719103   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:15.799052   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:18.871072   65557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0314 00:57:21.875726   65864 start.go:364] duration metric: took 3m53.150432404s to acquireMachinesLock for "no-preload-585806"
	I0314 00:57:21.875777   65864 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:57:21.875782   65864 fix.go:54] fixHost starting: 
	I0314 00:57:21.876117   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:57:21.876145   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:57:21.891135   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0314 00:57:21.891589   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:57:21.892096   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:57:21.892118   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:57:21.892476   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:57:21.892705   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:21.892868   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:57:21.894635   65864 fix.go:112] recreateIfNeeded on no-preload-585806: state=Stopped err=<nil>
	I0314 00:57:21.894652   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	W0314 00:57:21.894870   65864 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:57:21.896740   65864 out.go:177] * Restarting existing kvm2 VM for "no-preload-585806" ...
	I0314 00:57:21.898041   65864 main.go:141] libmachine: (no-preload-585806) Calling .Start
	I0314 00:57:21.898219   65864 main.go:141] libmachine: (no-preload-585806) Ensuring networks are active...
	I0314 00:57:21.899235   65864 main.go:141] libmachine: (no-preload-585806) Ensuring network default is active
	I0314 00:57:21.899677   65864 main.go:141] libmachine: (no-preload-585806) Ensuring network mk-no-preload-585806 is active
	I0314 00:57:21.900069   65864 main.go:141] libmachine: (no-preload-585806) Getting domain xml...
	I0314 00:57:21.900819   65864 main.go:141] libmachine: (no-preload-585806) Creating domain...
	I0314 00:57:23.105194   65864 main.go:141] libmachine: (no-preload-585806) Waiting to get IP...
	I0314 00:57:23.106090   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.106528   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.106637   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.106516   66729 retry.go:31] will retry after 255.90484ms: waiting for machine to come up
	I0314 00:57:23.364317   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.364804   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.364826   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.364757   66729 retry.go:31] will retry after 364.462281ms: waiting for machine to come up
	I0314 00:57:21.873289   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:57:21.873326   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:57:21.873694   65557 buildroot.go:166] provisioning hostname "embed-certs-164135"
	I0314 00:57:21.873720   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:57:21.873951   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:57:21.875591   65557 machine.go:97] duration metric: took 4m37.40921849s to provisionDockerMachine
	I0314 00:57:21.875631   65557 fix.go:56] duration metric: took 4m37.430459802s for fixHost
	I0314 00:57:21.875640   65557 start.go:83] releasing machines lock for "embed-certs-164135", held for 4m37.43047806s
	W0314 00:57:21.875666   65557 start.go:713] error starting host: provision: host is not running
	W0314 00:57:21.875751   65557 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0314 00:57:21.875760   65557 start.go:728] Will try again in 5 seconds ...
	I0314 00:57:23.731388   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:23.731971   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:23.732021   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:23.731924   66729 retry.go:31] will retry after 426.10288ms: waiting for machine to come up
	I0314 00:57:24.159436   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:24.159930   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:24.159966   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:24.159889   66729 retry.go:31] will retry after 490.499532ms: waiting for machine to come up
	I0314 00:57:24.651751   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:24.652239   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:24.652273   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:24.652218   66729 retry.go:31] will retry after 719.835184ms: waiting for machine to come up
	I0314 00:57:25.374185   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:25.374702   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:25.374728   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:25.374660   66729 retry.go:31] will retry after 944.773779ms: waiting for machine to come up
	I0314 00:57:26.320707   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:26.321049   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:26.321080   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:26.320994   66729 retry.go:31] will retry after 1.088133876s: waiting for machine to come up
	I0314 00:57:27.410642   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:27.411035   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:27.411066   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:27.410989   66729 retry.go:31] will retry after 1.379863279s: waiting for machine to come up
	I0314 00:57:26.877563   65557 start.go:360] acquireMachinesLock for embed-certs-164135: {Name:mk037e8d27925138311e43eefa39d28b8c4cedd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 00:57:28.792154   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:28.792533   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:28.792564   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:28.792473   66729 retry.go:31] will retry after 1.814530842s: waiting for machine to come up
	I0314 00:57:30.609244   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:30.609658   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:30.609693   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:30.609597   66729 retry.go:31] will retry after 1.625136332s: waiting for machine to come up
	I0314 00:57:32.236903   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:32.237390   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:32.237409   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:32.237352   66729 retry.go:31] will retry after 1.788940449s: waiting for machine to come up
	I0314 00:57:34.028330   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:34.028825   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:34.028863   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:34.028779   66729 retry.go:31] will retry after 3.427808205s: waiting for machine to come up
	I0314 00:57:37.458317   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:37.458803   65864 main.go:141] libmachine: (no-preload-585806) DBG | unable to find current IP address of domain no-preload-585806 in network mk-no-preload-585806
	I0314 00:57:37.458835   65864 main.go:141] libmachine: (no-preload-585806) DBG | I0314 00:57:37.458738   66729 retry.go:31] will retry after 3.173848854s: waiting for machine to come up
	I0314 00:57:41.915825   66021 start.go:364] duration metric: took 3m51.688049305s to acquireMachinesLock for "default-k8s-diff-port-652215"
	I0314 00:57:41.915886   66021 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:57:41.915895   66021 fix.go:54] fixHost starting: 
	I0314 00:57:41.916343   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:57:41.916378   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:57:41.933352   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37953
	I0314 00:57:41.933827   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:57:41.934418   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:57:41.934441   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:57:41.934820   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:57:41.934993   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:57:41.935162   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:57:41.936554   66021 fix.go:112] recreateIfNeeded on default-k8s-diff-port-652215: state=Stopped err=<nil>
	I0314 00:57:41.936586   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	W0314 00:57:41.936734   66021 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:57:41.939097   66021 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-652215" ...
	I0314 00:57:40.636094   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.636607   65864 main.go:141] libmachine: (no-preload-585806) Found IP for machine: 192.168.39.115
	I0314 00:57:40.636638   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has current primary IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.636645   65864 main.go:141] libmachine: (no-preload-585806) Reserving static IP address...
	I0314 00:57:40.637156   65864 main.go:141] libmachine: (no-preload-585806) Reserved static IP address: 192.168.39.115
	I0314 00:57:40.637189   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "no-preload-585806", mac: "52:54:00:2a:3a:3b", ip: "192.168.39.115"} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.637199   65864 main.go:141] libmachine: (no-preload-585806) Waiting for SSH to be available...
	I0314 00:57:40.637238   65864 main.go:141] libmachine: (no-preload-585806) DBG | skip adding static IP to network mk-no-preload-585806 - found existing host DHCP lease matching {name: "no-preload-585806", mac: "52:54:00:2a:3a:3b", ip: "192.168.39.115"}
	I0314 00:57:40.637254   65864 main.go:141] libmachine: (no-preload-585806) DBG | Getting to WaitForSSH function...
	I0314 00:57:40.639772   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.640240   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.640272   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.640445   65864 main.go:141] libmachine: (no-preload-585806) DBG | Using SSH client type: external
	I0314 00:57:40.640474   65864 main.go:141] libmachine: (no-preload-585806) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa (-rw-------)
	I0314 00:57:40.640508   65864 main.go:141] libmachine: (no-preload-585806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:57:40.640524   65864 main.go:141] libmachine: (no-preload-585806) DBG | About to run SSH command:
	I0314 00:57:40.640533   65864 main.go:141] libmachine: (no-preload-585806) DBG | exit 0
	I0314 00:57:40.770988   65864 main.go:141] libmachine: (no-preload-585806) DBG | SSH cmd err, output: <nil>: 
	I0314 00:57:40.771390   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetConfigRaw
	I0314 00:57:40.772025   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:40.774781   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.775128   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.775161   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.775407   65864 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/config.json ...
	I0314 00:57:40.775636   65864 machine.go:94] provisionDockerMachine start ...
	I0314 00:57:40.775658   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:40.775856   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:40.778051   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.778420   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.778447   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.778517   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:40.778728   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.778917   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.779101   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:40.779283   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:40.779521   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:40.779535   65864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:57:40.891616   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:57:40.891661   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:40.891913   65864 buildroot.go:166] provisioning hostname "no-preload-585806"
	I0314 00:57:40.891947   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:40.892139   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:40.895038   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.895441   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:40.895473   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:40.895593   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:40.895778   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.895899   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:40.896044   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:40.896206   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:40.896418   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:40.896438   65864 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-585806 && echo "no-preload-585806" | sudo tee /etc/hostname
	I0314 00:57:41.027921   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-585806
	
	I0314 00:57:41.027946   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.030406   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.030826   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.030856   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.031091   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.031314   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.031458   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.031656   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.031820   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.032043   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.032064   65864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-585806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-585806/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-585806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:57:41.152387   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:57:41.152420   65864 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:57:41.152443   65864 buildroot.go:174] setting up certificates
	I0314 00:57:41.152451   65864 provision.go:84] configureAuth start
	I0314 00:57:41.152459   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetMachineName
	I0314 00:57:41.152713   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:41.155431   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.155790   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.155816   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.155963   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.158272   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.158691   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.158720   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.158912   65864 provision.go:143] copyHostCerts
	I0314 00:57:41.158991   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:57:41.159005   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:57:41.159094   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:57:41.159204   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:57:41.159213   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:57:41.159242   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:57:41.159299   65864 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:57:41.159306   65864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:57:41.159326   65864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:57:41.159380   65864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.no-preload-585806 san=[127.0.0.1 192.168.39.115 localhost minikube no-preload-585806]
	I0314 00:57:41.204543   65864 provision.go:177] copyRemoteCerts
	I0314 00:57:41.204599   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:57:41.204624   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.207169   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.207479   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.207505   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.207717   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.207870   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.208042   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.208200   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.294111   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:57:41.319125   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 00:57:41.344061   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:57:41.369393   65864 provision.go:87] duration metric: took 216.929827ms to configureAuth
	I0314 00:57:41.369428   65864 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:57:41.369621   65864 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:57:41.369690   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.372440   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.372782   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.372809   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.373062   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.373298   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.373543   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.373716   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.373895   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.374097   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.374122   65864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:57:41.665162   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:57:41.665200   65864 machine.go:97] duration metric: took 889.549183ms to provisionDockerMachine
	I0314 00:57:41.665214   65864 start.go:293] postStartSetup for "no-preload-585806" (driver="kvm2")
	I0314 00:57:41.665227   65864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:57:41.665243   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.665626   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:57:41.665662   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.668351   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.668798   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.668827   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.669012   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.669412   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.669635   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.669794   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.758910   65864 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:57:41.763539   65864 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:57:41.763571   65864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:57:41.763645   65864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:57:41.763719   65864 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:57:41.763809   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:57:41.774372   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:57:41.799961   65864 start.go:296] duration metric: took 134.732457ms for postStartSetup
	I0314 00:57:41.800006   65864 fix.go:56] duration metric: took 19.924222364s for fixHost
	I0314 00:57:41.800030   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.802714   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.803178   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.803201   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.803357   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.803557   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.803730   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.803888   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.804064   65864 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:41.804220   65864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0314 00:57:41.804231   65864 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:57:41.915615   65864 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377861.868053197
	
	I0314 00:57:41.915646   65864 fix.go:216] guest clock: 1710377861.868053197
	I0314 00:57:41.915654   65864 fix.go:229] Guest: 2024-03-14 00:57:41.868053197 +0000 UTC Remote: 2024-03-14 00:57:41.800010702 +0000 UTC m=+253.225618100 (delta=68.042495ms)
	I0314 00:57:41.915695   65864 fix.go:200] guest clock delta is within tolerance: 68.042495ms
	I0314 00:57:41.915704   65864 start.go:83] releasing machines lock for "no-preload-585806", held for 20.039948178s
	I0314 00:57:41.915733   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.916097   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:41.918713   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.919145   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.919175   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.919352   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.919878   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.920065   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:57:41.920140   65864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:57:41.920200   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.920257   65864 ssh_runner.go:195] Run: cat /version.json
	I0314 00:57:41.920279   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:57:41.922799   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923104   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923176   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.923200   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923333   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.923527   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.923572   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:41.923602   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:41.923710   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.923788   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:57:41.923884   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:41.923950   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:57:41.924091   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:57:41.924265   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:57:42.004651   65864 ssh_runner.go:195] Run: systemctl --version
	I0314 00:57:42.045673   65864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:57:42.198196   65864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:57:42.204887   65864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:57:42.204968   65864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:57:42.223088   65864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:57:42.223116   65864 start.go:494] detecting cgroup driver to use...
	I0314 00:57:42.223181   65864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:57:42.240213   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:57:42.260222   65864 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:57:42.260282   65864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:57:42.279489   65864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:57:42.297898   65864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:57:42.436010   65864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:57:42.591582   65864 docker.go:233] disabling docker service ...
	I0314 00:57:42.591653   65864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:57:42.609192   65864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:57:42.629505   65864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:57:42.788667   65864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:57:42.920745   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:57:42.947679   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:57:42.970420   65864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:57:42.970496   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:42.984792   65864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:57:42.984851   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:42.998350   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:43.011001   65864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:57:43.023341   65864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:57:43.036165   65864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:57:43.047342   65864 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:57:43.047401   65864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:57:43.063390   65864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:57:43.075512   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:57:43.214939   65864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:57:43.370092   65864 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:57:43.370154   65864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:57:43.375110   65864 start.go:562] Will wait 60s for crictl version
	I0314 00:57:43.375156   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.379051   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:57:43.421498   65864 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:57:43.421587   65864 ssh_runner.go:195] Run: crio --version
	I0314 00:57:43.451281   65864 ssh_runner.go:195] Run: crio --version
	I0314 00:57:43.486171   65864 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 00:57:43.487776   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetIP
	I0314 00:57:43.490910   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:43.491299   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:57:43.491328   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:57:43.491513   65864 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 00:57:43.495972   65864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:57:43.510066   65864 kubeadm.go:877] updating cluster {Name:no-preload-585806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:57:43.510197   65864 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 00:57:43.510235   65864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:57:43.550172   65864 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 00:57:43.550198   65864 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:57:43.550251   65864 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:43.550290   65864 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.550308   65864 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.550348   65864 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.550373   65864 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.550409   65864 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.550329   65864 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0314 00:57:43.550287   65864 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.551857   65864 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.551883   65864 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.551922   65864 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.551926   65864 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.551915   65864 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.551860   65864 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:43.552047   65864 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0314 00:57:43.552087   65864 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:41.940702   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Start
	I0314 00:57:41.940872   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring networks are active...
	I0314 00:57:41.941571   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring network default is active
	I0314 00:57:41.941942   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Ensuring network mk-default-k8s-diff-port-652215 is active
	I0314 00:57:41.942369   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Getting domain xml...
	I0314 00:57:41.943060   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Creating domain...
	I0314 00:57:43.253573   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting to get IP...
	I0314 00:57:43.254399   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.254819   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.254871   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.254798   66848 retry.go:31] will retry after 250.726741ms: waiting for machine to come up
	I0314 00:57:43.507438   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.507947   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.507974   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.507889   66848 retry.go:31] will retry after 261.304364ms: waiting for machine to come up
	I0314 00:57:43.770392   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.770932   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:43.770992   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:43.770922   66848 retry.go:31] will retry after 399.951584ms: waiting for machine to come up
	I0314 00:57:44.172796   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.173301   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.173330   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:44.173250   66848 retry.go:31] will retry after 446.71472ms: waiting for machine to come up
	I0314 00:57:44.621959   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.622493   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:44.622524   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:44.622435   66848 retry.go:31] will retry after 594.760117ms: waiting for machine to come up
	I0314 00:57:43.767614   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.767919   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.781946   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.792745   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.820426   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.821936   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.874149   65864 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0314 00:57:43.874193   65864 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.874207   65864 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0314 00:57:43.874239   65864 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:43.874263   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.874281   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.909916   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0314 00:57:43.929648   65864 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0314 00:57:43.929701   65864 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:43.929756   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.929769   65864 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0314 00:57:43.929810   65864 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:43.929866   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958025   65864 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0314 00:57:43.958074   65864 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:43.958108   65864 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0314 00:57:43.958151   65864 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:43.958171   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 00:57:43.958188   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958124   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:43.958192   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0314 00:57:44.099675   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 00:57:44.099750   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0314 00:57:44.099805   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 00:57:44.099859   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.099898   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 00:57:44.099943   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.099999   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0314 00:57:44.100067   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:44.185667   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:44.185697   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0314 00:57:44.185784   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0314 00:57:44.185822   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0314 00:57:44.185833   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.185860   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 00:57:44.185874   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:44.185787   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:44.185787   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:44.191806   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:44.191853   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0314 00:57:44.191922   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:44.205188   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0314 00:57:44.428096   65864 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:47.084005   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.898127832s)
	I0314 00:57:47.084049   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0314 00:57:47.084073   65864 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:47.084084   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.898188272s)
	I0314 00:57:47.084114   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0314 00:57:47.084123   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0314 00:57:47.084163   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.898224944s)
	I0314 00:57:47.084176   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0314 00:57:47.084213   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.892265677s)
	I0314 00:57:47.084231   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0314 00:57:47.084261   65864 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.656144328s)
	I0314 00:57:47.084290   65864 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0314 00:57:47.084313   65864 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:47.084344   65864 ssh_runner.go:195] Run: which crictl
	I0314 00:57:45.219284   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:45.219835   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:45.219865   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:45.219763   66848 retry.go:31] will retry after 838.074484ms: waiting for machine to come up
	I0314 00:57:46.059759   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:46.060182   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:46.060212   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:46.060124   66848 retry.go:31] will retry after 1.038046627s: waiting for machine to come up
	I0314 00:57:47.100208   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:47.100623   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:47.100651   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:47.100574   66848 retry.go:31] will retry after 1.029629423s: waiting for machine to come up
	I0314 00:57:48.131899   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:48.132332   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:48.132360   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:48.132293   66848 retry.go:31] will retry after 1.38894741s: waiting for machine to come up
	I0314 00:57:49.522727   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:49.523219   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:49.523250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:49.523177   66848 retry.go:31] will retry after 1.498715394s: waiting for machine to come up
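The interleaved libmachine lines poll the libvirt DHCP leases for the domain's MAC address and, while no lease exists, retry after a growing delay ("will retry after ...: waiting for machine to come up"). A self-contained sketch of that poll-with-backoff pattern; the lookup callback stands in for the kvm2 driver's lease query and is not its real API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) between attempts, like the
// "will retry after ..." lines in the log.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2 // grow the delay between polls
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.61.7", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}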
	I0314 00:57:51.187413   65864 ssh_runner.go:235] Completed: which crictl: (4.103045994s)
	I0314 00:57:51.187456   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.103319804s)
	I0314 00:57:51.187508   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0314 00:57:51.187527   65864 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:57:51.187571   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:51.187669   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 00:57:51.236123   65864 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 00:57:51.236241   65864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:53.072155   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.88445651s)
	I0314 00:57:53.072191   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0314 00:57:53.072203   65864 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.835936702s)
	I0314 00:57:53.072239   65864 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0314 00:57:53.072216   65864 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:53.072298   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0314 00:57:51.024135   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:51.024551   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:51.024591   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:51.024485   66848 retry.go:31] will retry after 1.906242033s: waiting for machine to come up
	I0314 00:57:52.931992   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:52.932501   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:52.932532   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:52.932435   66848 retry.go:31] will retry after 2.502905013s: waiting for machine to come up
	I0314 00:57:55.041813   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969486159s)
	I0314 00:57:55.041846   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0314 00:57:55.041873   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:55.041921   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 00:57:56.401046   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.359096555s)
	I0314 00:57:56.401083   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0314 00:57:56.401125   65864 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 00:57:56.401206   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
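Each podman load step above is reported twice: once when the command starts and once as a "Completed: ... (N s)" line carrying the elapsed time. A small sketch of that run-and-time pattern, assuming an ordinary exec of a local command rather than minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runTimed executes a command and, like the ssh_runner.go:235 lines above,
// prints a "Completed" message with the duration when the command was slow.
func runTimed(name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	if d := time.Since(start); d > time.Second {
		fmt.Printf("Completed: %s %v: (%s)\n", name, args, d)
	}
	return err
}

func main() {
	if err := runTimed("sleep", "2"); err != nil {
		fmt.Println("command failed:", err)
	}
}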
	I0314 00:57:55.438250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:55.438696   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | unable to find current IP address of domain default-k8s-diff-port-652215 in network mk-default-k8s-diff-port-652215
	I0314 00:57:55.438728   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | I0314 00:57:55.438645   66848 retry.go:31] will retry after 4.267197677s: waiting for machine to come up
	I0314 00:57:59.709345   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.709884   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Found IP for machine: 192.168.61.7
	I0314 00:57:59.709901   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Reserving static IP address...
	I0314 00:57:59.709912   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has current primary IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.710329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-652215", mac: "52:54:00:58:e5:b0", ip: "192.168.61.7"} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.710365   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | skip adding static IP to network mk-default-k8s-diff-port-652215 - found existing host DHCP lease matching {name: "default-k8s-diff-port-652215", mac: "52:54:00:58:e5:b0", ip: "192.168.61.7"}
	I0314 00:57:59.710387   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Reserved static IP address: 192.168.61.7
	I0314 00:57:59.710404   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Waiting for SSH to be available...
	I0314 00:57:59.710420   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Getting to WaitForSSH function...
	I0314 00:57:59.712445   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.712764   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.712794   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.712867   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Using SSH client type: external
	I0314 00:57:59.712903   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa (-rw-------)
	I0314 00:57:59.712926   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:57:59.712940   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | About to run SSH command:
	I0314 00:57:59.712946   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | exit 0
	I0314 00:57:59.831120   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | SSH cmd err, output: <nil>: 
	I0314 00:57:59.831427   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetConfigRaw
	I0314 00:57:59.832230   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:57:59.834631   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.835052   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.835085   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.835264   66021 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/config.json ...
	I0314 00:57:59.835458   66021 machine.go:94] provisionDockerMachine start ...
	I0314 00:57:59.835478   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:57:59.835700   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:57:59.838267   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.838654   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.838681   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.838814   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:57:59.838985   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.839158   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.839318   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:57:59.839533   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:59.839750   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:57:59.839764   66021 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:57:59.943463   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:57:59.943488   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:57:59.943743   66021 buildroot.go:166] provisioning hostname "default-k8s-diff-port-652215"
	I0314 00:57:59.943765   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:57:59.943905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:57:59.946244   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.946561   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:57:59.946592   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:57:59.946858   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:57:59.947069   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.947218   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:57:59.947329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:57:59.947522   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:57:59.947682   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:57:59.947695   66021 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-652215 && echo "default-k8s-diff-port-652215" | sudo tee /etc/hostname
	I0314 00:58:00.063433   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-652215
	
	I0314 00:58:00.063467   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.066382   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.066832   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.066872   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.067051   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.067272   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.067505   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.067706   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.067914   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:00.068139   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:00.068167   66021 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-652215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-652215/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-652215' | sudo tee -a /etc/hosts; 
				fi
			fi
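The SSH command above is an idempotent /etc/hosts edit: if no line already ends in the new hostname, it either rewrites an existing 127.0.1.1 entry in place or appends one. The same logic, sketched in Go over a local copy of the file (illustrative only; the log applies it remotely via grep, sed and tee):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry returns the contents of an /etc/hosts-style file with a
// "127.0.1.1 <name>" entry guaranteed to be present, mirroring the shell logic.
func ensureHostsEntry(contents, name string) string {
	// Already mapped to some address? Leave the file alone.
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(contents) {
		return contents
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(contents) {
		// Rewrite the existing 127.0.1.1 line.
		return loopback.ReplaceAllString(contents, "127.0.1.1 "+name)
	}
	// Otherwise append a fresh entry.
	return strings.TrimRight(contents, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(ensureHostsEntry(string(data), "default-k8s-diff-port-652215"))
}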
	I0314 00:58:01.167666   66232 start.go:364] duration metric: took 3m57.948538504s to acquireMachinesLock for "old-k8s-version-004791"
	I0314 00:58:01.167732   66232 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:58:01.167743   66232 fix.go:54] fixHost starting: 
	I0314 00:58:01.168159   66232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:01.168192   66232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:01.184977   66232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39193
	I0314 00:58:01.185352   66232 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:01.185781   66232 main.go:141] libmachine: Using API Version  1
	I0314 00:58:01.185799   66232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:01.186133   66232 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:01.186318   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:01.186463   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetState
	I0314 00:58:01.187778   66232 fix.go:112] recreateIfNeeded on old-k8s-version-004791: state=Stopped err=<nil>
	I0314 00:58:01.187814   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	W0314 00:58:01.187966   66232 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:58:01.190508   66232 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-004791" ...
	I0314 00:58:00.185178   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:00.185209   66021 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:00.185258   66021 buildroot.go:174] setting up certificates
	I0314 00:58:00.185270   66021 provision.go:84] configureAuth start
	I0314 00:58:00.185286   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetMachineName
	I0314 00:58:00.185558   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:00.188566   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.188946   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.188977   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.189147   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.191605   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.191954   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.191981   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.192111   66021 provision.go:143] copyHostCerts
	I0314 00:58:00.192179   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:00.192193   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:00.192295   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:00.192409   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:00.192420   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:00.192449   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:00.192531   66021 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:00.192541   66021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:00.192571   66021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:00.192650   66021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-652215 san=[127.0.0.1 192.168.61.7 default-k8s-diff-port-652215 localhost minikube]
	I0314 00:58:00.441714   66021 provision.go:177] copyRemoteCerts
	I0314 00:58:00.441760   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:00.441783   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.444329   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.444711   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.444740   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.444905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.445096   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.445257   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.445369   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:00.529677   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:00.560670   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0314 00:58:00.589572   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:58:00.620349   66021 provision.go:87] duration metric: took 435.063551ms to configureAuth
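configureAuth above refreshes the machine's TLS material: the host CA and client certs are copied into the profile, a server certificate is generated with SANs for 127.0.0.1, the VM IP, the machine name, localhost and minikube, and the files are scp'd to /etc/docker on the guest. A hedged crypto/x509 sketch of issuing such a SAN'd server certificate from a throwaway CA (all names, key sizes and lifetimes here are illustrative, not the values minikube uses):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key pair, standing in for ca.pem / ca-key.pem from the log.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-652215"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-652215", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.7")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	// Emit the server cert PEM; the key and CA would be written alongside it.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}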
	I0314 00:58:00.620380   66021 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:00.620576   66021 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:00.620670   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.623250   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.623633   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.623663   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.623825   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.624017   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.624205   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.624346   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.624474   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:00.624650   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:00.624664   66021 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:00.940388   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:00.940416   66021 machine.go:97] duration metric: took 1.104945308s to provisionDockerMachine
	I0314 00:58:00.940430   66021 start.go:293] postStartSetup for "default-k8s-diff-port-652215" (driver="kvm2")
	I0314 00:58:00.940443   66021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:00.940513   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:00.940829   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:00.940861   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:00.943461   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.943854   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:00.943881   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:00.944035   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:00.944233   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:00.944392   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:00.944514   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.028775   66021 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:01.034219   66021 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:01.034246   66021 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:01.034319   66021 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:01.034417   66021 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:01.034534   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:01.043871   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:01.068236   66021 start.go:296] duration metric: took 127.791208ms for postStartSetup
	I0314 00:58:01.068281   66021 fix.go:56] duration metric: took 19.152386474s for fixHost
	I0314 00:58:01.068320   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.071153   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.071474   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.071519   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.071664   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.071873   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.072037   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.072184   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.072339   66021 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:01.072546   66021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0314 00:58:01.072560   66021 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:58:01.167500   66021 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377881.146926820
	
	I0314 00:58:01.167531   66021 fix.go:216] guest clock: 1710377881.146926820
	I0314 00:58:01.167543   66021 fix.go:229] Guest: 2024-03-14 00:58:01.14692682 +0000 UTC Remote: 2024-03-14 00:58:01.068285678 +0000 UTC m=+250.989822406 (delta=78.641142ms)
	I0314 00:58:01.167569   66021 fix.go:200] guest clock delta is within tolerance: 78.641142ms
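The fix.go lines above run "date +%s.%N" on the guest and compare it against the host-side reference timestamp, only resyncing the machine's clock when the delta exceeds a tolerance. A toy version of that comparison using the exact values from the log (the 2s tolerance is an assumed value, chosen only for illustration):

package main

import (
	"fmt"
	"math"
	"time"
)

// clockDelta reports the guest/host skew and whether it is within tolerance,
// like the fix.go "guest clock delta is within tolerance" check above.
func clockDelta(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	d := guest.Sub(host)
	return d, math.Abs(float64(d)) <= float64(tol)
}

func main() {
	guest := time.Unix(1710377881, 146926820)                    // date +%s.%N as read from the guest
	host := time.Date(2024, 3, 14, 0, 58, 1, 68285678, time.UTC) // host-side reference timestamp
	d, ok := clockDelta(guest, host, 2*time.Second)              // assumed 2s tolerance
	fmt.Printf("delta=%v, within tolerance: %v\n", d, ok)
}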
	I0314 00:58:01.167576   66021 start.go:83] releasing machines lock for "default-k8s-diff-port-652215", held for 19.251715411s
	I0314 00:58:01.167603   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.167900   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:01.170608   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.171001   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.171041   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.171190   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171674   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171856   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:01.171937   66021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:01.171985   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.172100   66021 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:01.172128   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:01.174787   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.174963   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175180   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.175209   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175343   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:01.175398   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:01.175477   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.175553   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:01.175677   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.175741   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:01.175803   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.175880   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:01.175939   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.176003   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:01.251768   66021 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:01.289374   66021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:01.438966   66021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:01.445524   66021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:01.445595   66021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:01.463672   66021 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:01.463699   66021 start.go:494] detecting cgroup driver to use...
	I0314 00:58:01.463778   66021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:01.485254   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:01.503492   66021 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:01.503552   66021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:01.522423   66021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:01.537421   66021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:01.664303   66021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:01.819916   66021 docker.go:233] disabling docker service ...
	I0314 00:58:01.819980   66021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:01.838697   66021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:01.853242   66021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:02.003570   66021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:02.146836   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:02.162421   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:02.191202   66021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:58:02.191272   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.206856   66021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:02.206923   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.219794   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.233272   66021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:02.245213   66021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:02.259118   66021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:02.273991   66021 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:02.274056   66021 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:02.289319   66021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:02.300063   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:02.416447   66021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:58:02.566738   66021 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:02.566859   66021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:02.572193   66021 start.go:562] Will wait 60s for crictl version
	I0314 00:58:02.572234   66021 ssh_runner.go:195] Run: which crictl
	I0314 00:58:02.576144   66021 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:02.615025   66021 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:02.615124   66021 ssh_runner.go:195] Run: crio --version
	I0314 00:58:02.643201   66021 ssh_runner.go:195] Run: crio --version
	I0314 00:58:02.673207   66021 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
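Before that message, the runtime is pointed at the right pause image and cgroup driver by sed-editing /etc/crio/crio.conf.d/02-crio.conf, then CRI-O is restarted and probed with "crictl version". The substitutions are easier to read applied to an in-memory sample of such a drop-in; the Go below mirrors the sed expressions from the log, and the sample file contents are made up:

package main

import (
	"fmt"
	"regexp"
)

// The log applies these edits with sed over SSH; here the same substitutions
// run against a local string holding a sample crio.conf.d drop-in.
func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Point CRI-O at the pause image the cluster expects.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Switch the cgroup driver to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line and re-add it as "pod"
	// right after cgroup_manager, as the two sed lines above do.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}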
	I0314 00:58:01.192096   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .Start
	I0314 00:58:01.192279   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring networks are active...
	I0314 00:58:01.192923   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network default is active
	I0314 00:58:01.193276   66232 main.go:141] libmachine: (old-k8s-version-004791) Ensuring network mk-old-k8s-version-004791 is active
	I0314 00:58:01.193771   66232 main.go:141] libmachine: (old-k8s-version-004791) Getting domain xml...
	I0314 00:58:01.194453   66232 main.go:141] libmachine: (old-k8s-version-004791) Creating domain...
	I0314 00:58:02.495098   66232 main.go:141] libmachine: (old-k8s-version-004791) Waiting to get IP...
	I0314 00:58:02.496096   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:02.496509   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:02.496599   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:02.496504   66971 retry.go:31] will retry after 226.458873ms: waiting for machine to come up
	I0314 00:58:02.724812   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:02.725355   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:02.725383   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:02.725305   66971 retry.go:31] will retry after 274.59062ms: waiting for machine to come up
	I0314 00:58:03.001727   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.002335   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.002486   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.002429   66971 retry.go:31] will retry after 362.865307ms: waiting for machine to come up
	I0314 00:57:58.881850   65864 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.480612113s)
	I0314 00:57:58.881884   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0314 00:57:58.881919   65864 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:58.881990   65864 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0314 00:57:59.732349   65864 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 00:57:59.732390   65864 cache_images.go:123] Successfully loaded all cached images
	I0314 00:57:59.732395   65864 cache_images.go:92] duration metric: took 16.182181374s to LoadCachedImages
	I0314 00:57:59.732406   65864 kubeadm.go:928] updating node { 192.168.39.115 8443 v1.29.0-rc.2 crio true true} ...
	I0314 00:57:59.732566   65864 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-585806 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:57:59.732632   65864 ssh_runner.go:195] Run: crio config
	I0314 00:57:59.780946   65864 cni.go:84] Creating CNI manager for ""
	I0314 00:57:59.780969   65864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:57:59.780980   65864 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:57:59.780999   65864 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-585806 NodeName:no-preload-585806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:57:59.781184   65864 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-585806"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:57:59.781255   65864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 00:57:59.791989   65864 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:57:59.792059   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:57:59.801720   65864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0314 00:57:59.819248   65864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 00:57:59.837405   65864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
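The kubeadm.go:187 block above is the fully rendered kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that the scp line just wrote to /var/tmp/minikube/kubeadm.yaml.new. A hedged text/template sketch of rendering such a file from the per-node values seen in the log; the template is trimmed down and the struct fields are invented for illustration, not minikube's:

package main

import (
	"os"
	"text/template"
)

// Params holds the handful of values that vary per node in the rendered config.
type Params struct {
	NodeName   string
	NodeIP     string
	APIPort    int
	K8sVersion string
	PodSubnet  string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: 10.96.0.0/12
`

func main() {
	p := Params{
		NodeName:   "no-preload-585806",
		NodeIP:     "192.168.39.115",
		APIPort:    8443,
		K8sVersion: "v1.29.0-rc.2",
		PodSubnet:  "10.244.0.0/16",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}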
	I0314 00:57:59.855909   65864 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0314 00:57:59.861139   65864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:57:59.877573   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:00.004672   65864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:00.025676   65864 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806 for IP: 192.168.39.115
	I0314 00:58:00.025696   65864 certs.go:194] generating shared ca certs ...
	I0314 00:58:00.025711   65864 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:00.025861   65864 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:00.025912   65864 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:00.025925   65864 certs.go:256] generating profile certs ...
	I0314 00:58:00.026023   65864 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/client.key
	I0314 00:58:00.026093   65864 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.key.e22b08b3
	I0314 00:58:00.026150   65864 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.key
	I0314 00:58:00.026304   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:00.026342   65864 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:00.026355   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:00.026393   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:00.026424   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:00.026461   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:00.026510   65864 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:00.027206   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:00.087876   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:00.130974   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:00.159419   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:00.202659   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 00:58:00.248014   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 00:58:00.273362   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:00.297326   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/no-preload-585806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:58:00.321565   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:00.346012   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:00.370094   65864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:00.393592   65864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:00.411060   65864 ssh_runner.go:195] Run: openssl version
	I0314 00:58:00.417031   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:00.428430   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.433251   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.433303   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:00.439142   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:00.451840   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:00.466706   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.472024   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.472101   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:00.479004   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:00.490877   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:00.503120   65864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.507926   65864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.507973   65864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:00.513957   65864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:00.526055   65864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:00.531442   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:00.538049   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:00.544709   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:00.551218   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:00.557610   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:00.564187   65864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
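
	The series of "openssl x509 -noout -checkend 86400" runs above verifies that each control-plane certificate remains valid for at least another 24 hours before the cluster restart proceeds. A minimal Go equivalent using only the standard library is sketched below; the certificate paths are copied from the log and the 86400-second window matches the -checkend argument.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// which is the condition "openssl x509 -noout -checkend <seconds>" tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			soon, err := expiresWithin(c, 86400*time.Second)
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Printf("%s expires within 24h: %v\n", c, soon)
		}
	}
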
	I0314 00:58:00.571582   65864 kubeadm.go:391] StartCluster: {Name:no-preload-585806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-585806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:00.571725   65864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:00.571793   65864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:00.625273   65864 cri.go:89] found id: ""
	I0314 00:58:00.625330   65864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:00.636554   65864 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:00.636582   65864 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:00.636588   65864 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:00.636630   65864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:00.648360   65864 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:00.649289   65864 kubeconfig.go:125] found "no-preload-585806" server: "https://192.168.39.115:8443"
	I0314 00:58:00.652107   65864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:00.664337   65864 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.115
	I0314 00:58:00.664378   65864 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:00.664390   65864 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:00.664436   65864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:00.702043   65864 cri.go:89] found id: ""
	I0314 00:58:00.702119   65864 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:00.721052   65864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:00.732931   65864 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:00.732961   65864 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:00.733015   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:00.743282   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:00.743363   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:00.753893   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:00.764545   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:00.764603   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:00.779121   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:00.795628   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:00.795690   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:00.807835   65864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:00.820920   65864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:00.821000   65864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
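
	In the sequence above each of the four kubeconfig files is grepped for the expected https://control-plane.minikube.internal:8443 server line and removed when the check fails, so kubeadm can regenerate it against the current control plane. A compact sketch of that loop follows; the file list and URL come from the log, and error handling is simplified for illustration.

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		const want = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, []byte(want)) {
				// Missing file or wrong endpoint: delete it so "kubeadm init phase
				// kubeconfig all" recreates it pointing at the current control plane.
				_ = os.Remove(f)
				fmt.Printf("removed stale config %s\n", f)
			}
		}
	}
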
	I0314 00:58:00.834341   65864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:00.844677   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:00.971502   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:01.810329   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:02.063422   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:02.144025   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
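
	The restart path regenerates the control plane piecewise by running individual "kubeadm init phase" subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init. A minimal sketch of driving those phases from Go is shown below; the binary path, phase arguments, and config path mirror the log lines, but this is an illustration rather than minikube's actual runner.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm"
		cfg := "/var/tmp/minikube/kubeadm.yaml"
		phases := [][]string{
			{"init", "phase", "certs", "all", "--config", cfg},
			{"init", "phase", "kubeconfig", "all", "--config", cfg},
			{"init", "phase", "kubelet-start", "--config", cfg},
			{"init", "phase", "control-plane", "all", "--config", cfg},
			{"init", "phase", "etcd", "local", "--config", cfg},
		}
		for _, args := range phases {
			cmd := exec.Command(kubeadm, args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
				os.Exit(1)
			}
		}
	}
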
	I0314 00:58:02.284020   65864 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:02.284117   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:02.784938   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:03.285046   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:03.349582   65864 api_server.go:72] duration metric: took 1.065560764s to wait for apiserver process to appear ...
	I0314 00:58:03.349613   65864 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:03.349634   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:03.350222   65864 api_server.go:269] stopped: https://192.168.39.115:8443/healthz: Get "https://192.168.39.115:8443/healthz": dial tcp 192.168.39.115:8443: connect: connection refused
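
	api_server.go keeps probing https://192.168.39.115:8443/healthz until it answers: at first the connection is refused, then the endpoint returns 403 and 500 while post-start hooks finish, and the loop retries about every 500ms. A minimal sketch of that polling loop is given below; the URL and interval come from the log, and TLS verification is skipped purely for illustration since this probe sends no client certificate.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 or the timeout expires,
	// tolerating connection-refused, 403, and 500 responses along the way.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports healthy
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			} else {
				fmt.Printf("healthz unreachable (%v), retrying\n", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.115:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
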
	I0314 00:58:02.674905   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetIP
	I0314 00:58:02.677914   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:02.678319   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:02.678358   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:02.678506   66021 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:02.682714   66021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:02.696263   66021 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-652215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:02.696407   66021 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:58:02.696474   66021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:02.736997   66021 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 00:58:02.737060   66021 ssh_runner.go:195] Run: which lz4
	I0314 00:58:02.741014   66021 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 00:58:02.745225   66021 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:02.745255   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 00:58:04.577503   66021 crio.go:444] duration metric: took 1.836515386s to copy over tarball
	I0314 00:58:04.577580   66021 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
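
	Because the stat probe shows the preloaded image tarball is not on the node yet, the ~458 MB lz4 archive is copied over and unpacked into /var with "tar --xattrs -I lz4". A small sketch of the check-then-extract step is below, shelling out to tar the same way; the paths are taken from the log, and it assumes tar and lz4 are available on the target.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"
		if _, err := os.Stat(tarball); err != nil {
			fmt.Fprintf(os.Stderr, "%s not present, copy it over first: %v\n", tarball, err)
			return
		}
		// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "extract failed: %v\n", err)
			return
		}
		fmt.Println("preloaded images extracted under /var")
	}
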
	I0314 00:58:03.367211   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.367946   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.367985   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.367818   66971 retry.go:31] will retry after 545.955079ms: waiting for machine to come up
	I0314 00:58:03.915415   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:03.915920   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:03.915946   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:03.915836   66971 retry.go:31] will retry after 509.217519ms: waiting for machine to come up
	I0314 00:58:04.426378   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:04.426707   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:04.426730   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:04.426682   66971 retry.go:31] will retry after 834.85927ms: waiting for machine to come up
	I0314 00:58:05.263751   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:05.264214   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:05.264244   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:05.264155   66971 retry.go:31] will retry after 986.483361ms: waiting for machine to come up
	I0314 00:58:06.251927   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:06.252550   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:06.252573   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:06.252475   66971 retry.go:31] will retry after 1.151541473s: waiting for machine to come up
	I0314 00:58:07.405797   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:07.406395   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:07.406425   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:07.406349   66971 retry.go:31] will retry after 1.406754601s: waiting for machine to come up
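
	While the old-k8s-version VM boots, libmachine cannot yet find a DHCP lease for its MAC address, so retry.go waits with growing, slightly jittered delays (546ms, 509ms, 835ms, 986ms, 1.15s, 1.4s, ...) before probing again. A generic sketch of that retry pattern follows; the probe function is a placeholder and the backoff constants are illustrative, not minikube's exact schedule.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry keeps calling probe until it succeeds or attempts run out, sleeping an
	// increasing, jittered delay between tries - the pattern behind the
	// "will retry after ..." lines above.
	func retry(attempts int, base time.Duration, probe func() error) error {
		for i := 0; i < attempts; i++ {
			if err := probe(); err == nil {
				return nil
			}
			delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("attempt %d failed, will retry after %v\n", i+1, delay)
			time.Sleep(delay)
		}
		return errors.New("gave up waiting for machine to come up")
	}

	func main() {
		hasIP := false // placeholder for "does the domain have a DHCP lease yet?"
		_ = retry(5, 500*time.Millisecond, func() error {
			if !hasIP {
				return errors.New("no IP address yet")
			}
			return nil
		})
	}
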
	I0314 00:58:03.850705   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.738726   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:06.738753   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:06.738788   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.754844   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:06.754883   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:06.850175   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:06.859445   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:06.859483   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.350592   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:07.367299   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:07.367337   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.850476   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:08.566122   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:08.566165   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:08.566182   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:08.571741   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:08.571777   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:07.355046   66021 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.77743394s)
	I0314 00:58:07.355081   66021 crio.go:451] duration metric: took 2.77754644s to extract the tarball
	I0314 00:58:07.355093   66021 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 00:58:07.401032   66021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:07.451493   66021 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:58:07.451515   66021 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:58:07.451523   66021 kubeadm.go:928] updating node { 192.168.61.7 8444 v1.28.4 crio true true} ...
	I0314 00:58:07.451679   66021 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-652215 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:07.451756   66021 ssh_runner.go:195] Run: crio config
	I0314 00:58:07.500159   66021 cni.go:84] Creating CNI manager for ""
	I0314 00:58:07.500182   66021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:07.500192   66021 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:07.500211   66021 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.7 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-652215 NodeName:default-k8s-diff-port-652215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:58:07.500349   66021 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-652215"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:07.500398   66021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:58:07.515207   66021 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:07.515281   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:07.530918   66021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0314 00:58:07.558457   66021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:07.582126   66021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 00:58:07.678701   66021 ssh_runner.go:195] Run: grep 192.168.61.7	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:07.684200   66021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:07.701599   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:07.825784   66021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:07.848241   66021 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215 for IP: 192.168.61.7
	I0314 00:58:07.848265   66021 certs.go:194] generating shared ca certs ...
	I0314 00:58:07.848286   66021 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:07.848457   66021 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:07.848515   66021 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:07.848529   66021 certs.go:256] generating profile certs ...
	I0314 00:58:07.848644   66021 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/client.key
	I0314 00:58:07.935830   66021 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.key.b1ed833a
	I0314 00:58:07.935933   66021 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.key
	I0314 00:58:07.936092   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:07.936147   66021 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:07.936161   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:07.936191   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:07.936222   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:07.936255   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:07.936326   66021 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:07.937040   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:07.981116   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:08.010341   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:08.036689   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:08.064909   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 00:58:08.092883   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 00:58:08.119465   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:08.146029   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/default-k8s-diff-port-652215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 00:58:08.171735   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:08.198370   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:08.225423   66021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:08.253303   66021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:08.272262   66021 ssh_runner.go:195] Run: openssl version
	I0314 00:58:08.278047   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:08.289661   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.294307   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.294365   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:08.300267   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:08.311382   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:08.322886   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.328522   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.328588   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:08.335598   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:08.347048   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:08.358811   66021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.365065   66021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.365113   66021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:08.372929   66021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:08.384586   66021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:08.389382   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:08.395577   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:08.401901   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:08.409134   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:08.415666   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:08.422160   66021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 00:58:08.428553   66021 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-652215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-652215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:08.428681   66021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:08.428757   66021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:08.471162   66021 cri.go:89] found id: ""
	I0314 00:58:08.471246   66021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:08.482236   66021 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:08.482258   66021 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:08.482266   66021 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:08.482318   66021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:08.492599   66021 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:08.493612   66021 kubeconfig.go:125] found "default-k8s-diff-port-652215" server: "https://192.168.61.7:8444"
	I0314 00:58:08.495896   66021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:08.509437   66021 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.7
	I0314 00:58:08.509469   66021 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:08.509498   66021 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:08.509552   66021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:08.549257   66021 cri.go:89] found id: ""
	I0314 00:58:08.549319   66021 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:08.570357   66021 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:08.580942   66021 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:08.580961   66021 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:08.581002   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 00:58:08.590668   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:08.590750   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:08.600638   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 00:58:08.610219   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:08.610289   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:08.620324   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 00:58:08.629979   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:08.630037   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:08.640264   66021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 00:58:08.650070   66021 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:08.650126   66021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
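
The grep/rm sequence above is the stale-kubeconfig cleanup: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is kept only if it already points at https://control-plane.minikube.internal:8444, and is removed otherwise (here the files are simply missing, so grep exits 2 and the rm -f is a no-op). A rough Go equivalent of that keep-or-remove decision, written as a sketch rather than the real kubeadm.go logic:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // dropUnlessMatching removes path unless it already references the expected
    // control-plane endpoint (the grep-or-remove pattern from the log).
    func dropUnlessMatching(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err == nil && strings.Contains(string(data), endpoint) {
            return nil // file exists and already points at the endpoint: keep it
        }
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        return os.Remove(path) // missing files surface as IsNotExist, ignored below
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8444"
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            if err := dropUnlessMatching("/etc/kubernetes/"+f, endpoint); err != nil && !os.IsNotExist(err) {
                fmt.Fprintln(os.Stderr, f, err)
            }
        }
    }
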
	I0314 00:58:08.661293   66021 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:08.671779   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:08.808194   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:09.724860   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:09.979007   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:10.059809   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
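
Because existing configuration was detected, the restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) against /var/tmp/minikube/kubeadm.yaml instead of running a full kubeadm init. A compact sketch of that sequencing, reusing the exact commands from the log and stopping at the first failing phase (error handling simplified, not the actual implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            // Same shape as the logged /bin/bash -c invocations.
            cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
                return // abort the restart on the first failing phase
            }
        }
        fmt.Println("control-plane restart phases completed")
    }
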
	I0314 00:58:08.850333   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.132696   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.132738   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:09.349928   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.354965   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.355007   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:09.850589   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:09.855760   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:09.855791   65864 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:10.350395   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 00:58:10.356047   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0314 00:58:10.363343   65864 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 00:58:10.363367   65864 api_server.go:131] duration metric: took 7.013748269s to wait for apiserver health ...
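
The apiserver is only considered healthy once /healthz returns 200; the 500 responses with individual poststarthook failures above (and, for the other profile later in this log, 403 responses for anonymous requests) are treated as "not ready yet" and retried on a roughly 500ms cadence. A minimal sketch of such a polling loop; it deliberately skips TLS verification to stay self-contained, which is an assumption of this sketch and not minikube's actual client setup:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns 200 or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Loading the minikube CA would be the stricter option.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // "healthz returned 200: ok"
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // matches the retry cadence seen in the log
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.115:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
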
	I0314 00:58:10.363376   65864 cni.go:84] Creating CNI manager for ""
	I0314 00:58:10.363382   65864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:10.365214   65864 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:10.366578   65864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:58:10.388294   65864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:58:10.416671   65864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:58:10.432468   65864 system_pods.go:59] 8 kube-system pods found
	I0314 00:58:10.432506   65864 system_pods.go:61] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:58:10.432513   65864 system_pods.go:61] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:58:10.432522   65864 system_pods.go:61] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:58:10.432528   65864 system_pods.go:61] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:58:10.432532   65864 system_pods.go:61] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 00:58:10.432536   65864 system_pods.go:61] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:58:10.432541   65864 system_pods.go:61] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:58:10.432545   65864 system_pods.go:61] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 00:58:10.432552   65864 system_pods.go:74] duration metric: took 15.857608ms to wait for pod list to return data ...
	I0314 00:58:10.432558   65864 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:58:10.435982   65864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:58:10.436009   65864 node_conditions.go:123] node cpu capacity is 2
	I0314 00:58:10.436022   65864 node_conditions.go:105] duration metric: took 3.459248ms to run NodePressure ...
	I0314 00:58:10.436048   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:10.711752   65864 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:58:10.718781   65864 kubeadm.go:733] kubelet initialised
	I0314 00:58:10.718802   65864 kubeadm.go:734] duration metric: took 7.016806ms waiting for restarted kubelet to initialise ...
	I0314 00:58:10.718811   65864 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:10.725838   65864 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.732973   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "coredns-76f75df574-lptfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.733003   65864 pod_ready.go:81] duration metric: took 7.130935ms for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.733015   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "coredns-76f75df574-lptfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.733024   65864 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.739301   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "etcd-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.739330   65864 pod_ready.go:81] duration metric: took 6.292816ms for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.739344   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "etcd-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.739353   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.745734   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-apiserver-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.745764   65864 pod_ready.go:81] duration metric: took 6.401917ms for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.745775   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-apiserver-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.745793   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:10.823797   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.823901   65864 pod_ready.go:81] duration metric: took 78.092373ms for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:10.823920   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:10.823930   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:11.221218   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-proxy-wpdb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.221255   65864 pod_ready.go:81] duration metric: took 397.31401ms for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:11.221268   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-proxy-wpdb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.221276   65864 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:11.622051   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "kube-scheduler-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.622089   65864 pod_ready.go:81] duration metric: took 400.804067ms for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:11.622101   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "kube-scheduler-no-preload-585806" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:11.622109   65864 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:12.021835   65864 pod_ready.go:97] node "no-preload-585806" hosting pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:12.021869   65864 pod_ready.go:81] duration metric: took 399.741056ms for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:12.021882   65864 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-585806" hosting pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:12.021892   65864 pod_ready.go:38] duration metric: took 1.303069721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
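
Each pod_ready wait above fetches the pod and checks its Ready condition, and here every wait short-circuits because the hosting node is itself not yet "Ready". A sketch of the same readiness check using the standard client-go packages; the kubeconfig path and pod name are taken from this log, and the helper is an illustration rather than the one in pod_ready.go:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod has a PodReady condition set to True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18375-4912/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-lptfk", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("ready:", isPodReady(pod))
    }
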
	I0314 00:58:12.021915   65864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:58:12.039361   65864 ops.go:34] apiserver oom_adj: -16
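
The oom_adj probe above reads the legacy /proc interface, whose range is -17 (never OOM-kill) to +15; a value of -16 means the kernel OOM killer will strongly prefer other processes over kube-apiserver. A small sketch of the same probe in Go (pgrep and the /proc layout are standard Linux; this is only an illustration):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Newest kube-apiserver PID, as with `pgrep kube-apiserver` above.
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
            os.Exit(1)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("apiserver oom_adj: %s", adj) // the log reports -16
    }
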
	I0314 00:58:12.039397   65864 kubeadm.go:591] duration metric: took 11.402802169s to restartPrimaryControlPlane
	I0314 00:58:12.039408   65864 kubeadm.go:393] duration metric: took 11.467836192s to StartCluster
	I0314 00:58:12.039426   65864 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:12.039516   65864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:12.041925   65864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:12.042230   65864 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:58:12.044069   65864 out.go:177] * Verifying Kubernetes components...
	I0314 00:58:12.042310   65864 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:58:12.042489   65864 config.go:182] Loaded profile config "no-preload-585806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 00:58:12.045460   65864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:12.045470   65864 addons.go:69] Setting metrics-server=true in profile "no-preload-585806"
	I0314 00:58:12.045505   65864 addons.go:234] Setting addon metrics-server=true in "no-preload-585806"
	W0314 00:58:12.045517   65864 addons.go:243] addon metrics-server should already be in state true
	I0314 00:58:12.045461   65864 addons.go:69] Setting storage-provisioner=true in profile "no-preload-585806"
	I0314 00:58:12.045548   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.045557   65864 addons.go:234] Setting addon storage-provisioner=true in "no-preload-585806"
	W0314 00:58:12.045568   65864 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:58:12.045595   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.045462   65864 addons.go:69] Setting default-storageclass=true in profile "no-preload-585806"
	I0314 00:58:12.045653   65864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-585806"
	I0314 00:58:12.045960   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.045989   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.045989   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.046009   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.046026   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.046052   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.065596   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38705
	I0314 00:58:12.065599   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
	I0314 00:58:12.066126   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.066229   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.066725   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.066747   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.066921   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.066937   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.067164   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.067341   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.067347   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.067943   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.067969   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.071254   65864 addons.go:234] Setting addon default-storageclass=true in "no-preload-585806"
	W0314 00:58:12.071275   65864 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:58:12.071302   65864 host.go:66] Checking if "no-preload-585806" exists ...
	I0314 00:58:12.071676   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.071703   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.089025   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0314 00:58:12.089439   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.089971   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.089987   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.091596   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38431
	I0314 00:58:12.091896   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.092061   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.092552   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.092573   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.092792   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39831
	I0314 00:58:12.092997   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.093009   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.093356   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.093879   65864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:12.093914   65864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:12.094125   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.094811   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.094830   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.095229   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.095432   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.097415   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.099392   65864 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:12.100577   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:58:12.100594   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:58:12.100618   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.103892   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.104467   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.104489   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.104667   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.106971   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.107150   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.107313   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.111900   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0314 00:58:12.112581   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.113114   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.113130   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.113580   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.113776   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.115360   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.115676   65864 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:12.115691   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:58:12.115707   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.117453   65864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0314 00:58:12.118029   65864 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:12.118488   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.118776   65864 main.go:141] libmachine: Using API Version  1
	I0314 00:58:12.118793   65864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:12.118960   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.118982   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.119173   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.119729   65864 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:12.119945   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetState
	I0314 00:58:12.121529   65864 main.go:141] libmachine: (no-preload-585806) Calling .DriverName
	I0314 00:58:12.123821   65864 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:08.814918   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:08.815383   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:08.815414   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:08.815336   66971 retry.go:31] will retry after 1.619075545s: waiting for machine to come up
	I0314 00:58:10.435841   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:10.436245   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:10.436272   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:10.436204   66971 retry.go:31] will retry after 2.396707044s: waiting for machine to come up
	I0314 00:58:12.834287   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:12.834691   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:12.834720   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:12.834649   66971 retry.go:31] will retry after 2.803309164s: waiting for machine to come up
	I0314 00:58:12.122163   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.125529   65864 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:12.125549   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:58:12.125566   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHHostname
	I0314 00:58:12.125622   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.128908   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.128920   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.129475   65864 main.go:141] libmachine: (no-preload-585806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3a:3b", ip: ""} in network mk-no-preload-585806: {Iface:virbr4 ExpiryTime:2024-03-14 01:48:37 +0000 UTC Type:0 Mac:52:54:00:2a:3a:3b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:no-preload-585806 Clientid:01:52:54:00:2a:3a:3b}
	I0314 00:58:12.129499   65864 main.go:141] libmachine: (no-preload-585806) DBG | domain no-preload-585806 has defined IP address 192.168.39.115 and MAC address 52:54:00:2a:3a:3b in network mk-no-preload-585806
	I0314 00:58:12.129653   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHPort
	I0314 00:58:12.129851   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHKeyPath
	I0314 00:58:12.130023   65864 main.go:141] libmachine: (no-preload-585806) Calling .GetSSHUsername
	I0314 00:58:12.130149   65864 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/no-preload-585806/id_rsa Username:docker}
	I0314 00:58:12.258865   65864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:12.279758   65864 node_ready.go:35] waiting up to 6m0s for node "no-preload-585806" to be "Ready" ...
	I0314 00:58:12.393255   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:58:12.393276   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:58:12.396083   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:12.401894   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:12.442825   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:58:12.442852   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:58:12.516967   65864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:12.516997   65864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:58:12.549493   65864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:13.476386   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.080265638s)
	I0314 00:58:13.476460   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.476489   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.476397   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.074462931s)
	I0314 00:58:13.476626   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.476639   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477023   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477039   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477036   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477047   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.477055   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477066   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477071   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477087   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477094   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.477100   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.477458   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.477491   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477498   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.477550   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.477566   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.489141   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.489174   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.489460   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.489522   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.489541   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.586956   65864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037420385s)
	I0314 00:58:13.587013   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.587029   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.587367   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.587386   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.587396   65864 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:13.587405   65864 main.go:141] libmachine: (no-preload-585806) Calling .Close
	I0314 00:58:13.587406   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.587781   65864 main.go:141] libmachine: (no-preload-585806) DBG | Closing plugin on server side
	I0314 00:58:13.587856   65864 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:13.587878   65864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:13.587910   65864 addons.go:470] Verifying addon metrics-server=true in "no-preload-585806"
	I0314 00:58:13.590325   65864 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:58:13.591691   65864 addons.go:505] duration metric: took 1.549382287s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 00:58:10.176806   66021 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:10.176884   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:10.677299   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:11.177069   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:11.214552   66021 api_server.go:72] duration metric: took 1.037744324s to wait for apiserver process to appear ...
	I0314 00:58:11.214587   66021 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:11.214610   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:11.215138   66021 api_server.go:269] stopped: https://192.168.61.7:8444/healthz: Get "https://192.168.61.7:8444/healthz": dial tcp 192.168.61.7:8444: connect: connection refused
	I0314 00:58:11.714667   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.616838   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:14.616877   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:14.616893   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.658759   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:58:14.658796   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:58:14.715024   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:14.733591   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:14.733634   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:15.214665   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:15.234066   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:15.234110   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:15.715301   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:15.721645   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:58:15.721675   66021 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:58:16.215286   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 00:58:16.222564   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0314 00:58:16.232709   66021 api_server.go:141] control plane version: v1.28.4
	I0314 00:58:16.232737   66021 api_server.go:131] duration metric: took 5.018142072s to wait for apiserver health ...
	I0314 00:58:16.232747   66021 cni.go:84] Creating CNI manager for ""
	I0314 00:58:16.232756   66021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:16.234470   66021 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:16.235612   66021 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:58:16.248214   66021 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:58:16.277370   66021 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:58:16.288623   66021 system_pods.go:59] 8 kube-system pods found
	I0314 00:58:16.288650   66021 system_pods.go:61] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:58:16.288657   66021 system_pods.go:61] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:58:16.288663   66021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:58:16.288671   66021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:58:16.288677   66021 system_pods.go:61] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 00:58:16.288682   66021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:58:16.288687   66021 system_pods.go:61] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:58:16.288690   66021 system_pods.go:61] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 00:58:16.288696   66021 system_pods.go:74] duration metric: took 11.305344ms to wait for pod list to return data ...
	I0314 00:58:16.288702   66021 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:58:16.292286   66021 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:58:16.292308   66021 node_conditions.go:123] node cpu capacity is 2
	I0314 00:58:16.292320   66021 node_conditions.go:105] duration metric: took 3.61409ms to run NodePressure ...
	I0314 00:58:16.292335   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:16.512870   66021 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:58:16.517507   66021 kubeadm.go:733] kubelet initialised
	I0314 00:58:16.517529   66021 kubeadm.go:734] duration metric: took 4.638745ms waiting for restarted kubelet to initialise ...
	I0314 00:58:16.517536   66021 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:16.523002   66021 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.527973   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.527992   66021 pod_ready.go:81] duration metric: took 4.971635ms for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.527999   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.528005   66021 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.532109   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.532130   66021 pod_ready.go:81] duration metric: took 4.119441ms for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.532138   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.532144   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.536921   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.536947   66021 pod_ready.go:81] duration metric: took 4.797369ms for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.536957   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.536963   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:16.681145   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.681174   66021 pod_ready.go:81] duration metric: took 144.203955ms for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:16.681183   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:16.681189   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.081346   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-proxy-s7dwp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.081372   66021 pod_ready.go:81] duration metric: took 400.176843ms for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.081380   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-proxy-s7dwp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.081386   66021 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.481726   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.481760   66021 pod_ready.go:81] duration metric: took 400.364366ms for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.481775   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.481784   66021 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.881076   66021 pod_ready.go:97] node "default-k8s-diff-port-652215" hosting pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.881101   66021 pod_ready.go:81] duration metric: took 399.308565ms for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	E0314 00:58:17.881112   66021 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-652215" hosting pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:17.881118   66021 pod_ready.go:38] duration metric: took 1.363574607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:17.881137   66021 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:58:17.893680   66021 ops.go:34] apiserver oom_adj: -16
	I0314 00:58:17.893703   66021 kubeadm.go:591] duration metric: took 9.411432465s to restartPrimaryControlPlane
	I0314 00:58:17.893711   66021 kubeadm.go:393] duration metric: took 9.465165177s to StartCluster
	I0314 00:58:17.893725   66021 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:17.893783   66021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:17.895292   66021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:17.895523   66021 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:58:17.897956   66021 out.go:177] * Verifying Kubernetes components...
	I0314 00:58:17.895646   66021 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:58:17.895730   66021 config.go:182] Loaded profile config "default-k8s-diff-port-652215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:17.898002   66021 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.898023   66021 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.899554   66021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:17.897994   66021 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-652215"
	I0314 00:58:17.899681   66021 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.899693   66021 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:58:17.898063   66021 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-652215"
	I0314 00:58:17.899720   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.898068   66021 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.899784   66021 addons.go:243] addon metrics-server should already be in state true
	I0314 00:58:17.899811   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.900048   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900077   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.900111   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900141   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.900171   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.900188   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.915185   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0314 00:58:17.915208   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
	I0314 00:58:17.915576   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.915710   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.916152   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.916171   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.916305   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.916330   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.916511   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.916671   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.916831   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.917105   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.917132   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.918252   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41673
	I0314 00:58:17.918697   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.919230   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.919250   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.919523   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.920110   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.920171   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.920214   66021 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-652215"
	W0314 00:58:17.920231   66021 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:58:17.920262   66021 host.go:66] Checking if "default-k8s-diff-port-652215" exists ...
	I0314 00:58:17.920646   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.920681   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.932173   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0314 00:58:17.932593   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.933094   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.933117   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.933473   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.933707   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.934448   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0314 00:58:17.934516   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I0314 00:58:17.934891   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.935069   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.935423   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.935443   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.935577   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.935595   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.935663   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.937699   66021 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:17.936039   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.936042   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.938931   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:58:17.938948   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:58:17.938977   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.939211   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.939596   66021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:17.939625   66021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:17.941065   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.942845   66021 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:15.639214   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:15.639656   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:15.639696   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:15.639617   66971 retry.go:31] will retry after 3.192360952s: waiting for machine to come up
	I0314 00:58:14.292798   65864 node_ready.go:53] node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:16.784397   65864 node_ready.go:53] node "no-preload-585806" has status "Ready":"False"
	I0314 00:58:17.284580   65864 node_ready.go:49] node "no-preload-585806" has status "Ready":"True"
	I0314 00:58:17.284611   65864 node_ready.go:38] duration metric: took 5.004823398s for node "no-preload-585806" to be "Ready" ...
	I0314 00:58:17.284623   65864 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:17.290888   65864 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.297127   65864 pod_ready.go:92] pod "coredns-76f75df574-lptfk" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:17.297152   65864 pod_ready.go:81] duration metric: took 6.235547ms for pod "coredns-76f75df574-lptfk" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.297163   65864 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:17.944316   66021 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:17.942113   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.942648   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.944350   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:58:17.944376   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.944371   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.944451   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.944500   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.944675   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.944826   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:17.947097   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.947474   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.947507   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.947640   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.947816   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.947960   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.948095   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:17.957502   66021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35563
	I0314 00:58:17.957899   66021 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:17.958344   66021 main.go:141] libmachine: Using API Version  1
	I0314 00:58:17.958364   66021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:17.958645   66021 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:17.958816   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetState
	I0314 00:58:17.960222   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .DriverName
	I0314 00:58:17.960577   66021 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:17.960591   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:58:17.960610   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHHostname
	I0314 00:58:17.963238   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.963676   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:e5:b0", ip: ""} in network mk-default-k8s-diff-port-652215: {Iface:virbr1 ExpiryTime:2024-03-14 01:57:53 +0000 UTC Type:0 Mac:52:54:00:58:e5:b0 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-652215 Clientid:01:52:54:00:58:e5:b0}
	I0314 00:58:17.963698   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | domain default-k8s-diff-port-652215 has defined IP address 192.168.61.7 and MAC address 52:54:00:58:e5:b0 in network mk-default-k8s-diff-port-652215
	I0314 00:58:17.963850   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHPort
	I0314 00:58:17.963995   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHKeyPath
	I0314 00:58:17.964114   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .GetSSHUsername
	I0314 00:58:17.964213   66021 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/default-k8s-diff-port-652215/id_rsa Username:docker}
	I0314 00:58:18.098402   66021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:18.116854   66021 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-652215" to be "Ready" ...
	I0314 00:58:18.232236   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:58:18.232256   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:58:18.238208   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:58:18.261851   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:58:18.263856   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:58:18.263877   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:58:18.325498   66021 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:18.325520   66021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:58:18.391369   66021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:58:19.482825   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24458075s)
	I0314 00:58:19.482879   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.482891   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.482959   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.221078542s)
	I0314 00:58:19.483000   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483017   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483196   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483216   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.483212   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.483224   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483233   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483242   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483258   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.483273   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.483280   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.483288   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.483551   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.483590   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.484020   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.484105   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.484148   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.491315   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.491332   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.491552   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.491583   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583024   66021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.191597961s)
	I0314 00:58:19.583083   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.583096   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.583362   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.583400   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583421   66021 main.go:141] libmachine: Making call to close driver server
	I0314 00:58:19.583435   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.583447   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) Calling .Close
	I0314 00:58:19.583724   66021 main.go:141] libmachine: (default-k8s-diff-port-652215) DBG | Closing plugin on server side
	I0314 00:58:19.583762   66021 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:58:19.583815   66021 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:58:19.583837   66021 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-652215"
	I0314 00:58:19.585771   66021 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:58:19.587252   66021 addons.go:505] duration metric: took 1.691609624s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 00:58:20.120924   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:18.833069   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:18.833438   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | unable to find current IP address of domain old-k8s-version-004791 in network mk-old-k8s-version-004791
	I0314 00:58:18.833470   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | I0314 00:58:18.833388   66971 retry.go:31] will retry after 5.67556795s: waiting for machine to come up
	I0314 00:58:19.304162   65864 pod_ready.go:102] pod "etcd-no-preload-585806" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:20.804158   65864 pod_ready.go:92] pod "etcd-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.804180   65864 pod_ready.go:81] duration metric: took 3.507009199s for pod "etcd-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.804191   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.810040   65864 pod_ready.go:92] pod "kube-apiserver-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.810065   65864 pod_ready.go:81] duration metric: took 5.865494ms for pod "kube-apiserver-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.810080   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.815049   65864 pod_ready.go:92] pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.815077   65864 pod_ready.go:81] duration metric: took 4.984409ms for pod "kube-controller-manager-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.815086   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.821316   65864 pod_ready.go:92] pod "kube-proxy-wpdb9" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:20.821342   65864 pod_ready.go:81] duration metric: took 6.249664ms for pod "kube-proxy-wpdb9" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:20.821354   65864 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:21.828500   65864 pod_ready.go:92] pod "kube-scheduler-no-preload-585806" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:21.828524   65864 pod_ready.go:81] duration metric: took 1.00716238s for pod "kube-scheduler-no-preload-585806" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:21.828533   65864 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:22.621791   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:25.121386   66021 node_ready.go:53] node "default-k8s-diff-port-652215" has status "Ready":"False"
	I0314 00:58:26.059625   65557 start.go:364] duration metric: took 59.181975988s to acquireMachinesLock for "embed-certs-164135"
	I0314 00:58:26.059670   65557 start.go:96] Skipping create...Using existing machine configuration
	I0314 00:58:26.059681   65557 fix.go:54] fixHost starting: 
	I0314 00:58:26.060084   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:58:26.060117   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:58:26.079338   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0314 00:58:26.079705   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:58:26.080159   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:58:26.080181   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:58:26.080547   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:58:26.080747   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:26.080907   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:58:26.082633   65557 fix.go:112] recreateIfNeeded on embed-certs-164135: state=Stopped err=<nil>
	I0314 00:58:26.082671   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	W0314 00:58:26.082861   65557 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 00:58:26.085610   65557 out.go:177] * Restarting existing kvm2 VM for "embed-certs-164135" ...
	I0314 00:58:24.511666   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.512275   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has current primary IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.512307   66232 main.go:141] libmachine: (old-k8s-version-004791) Found IP for machine: 192.168.72.11
	I0314 00:58:24.512321   66232 main.go:141] libmachine: (old-k8s-version-004791) Reserving static IP address...
	I0314 00:58:24.512704   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.512726   66232 main.go:141] libmachine: (old-k8s-version-004791) Reserved static IP address: 192.168.72.11
	I0314 00:58:24.512740   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | skip adding static IP to network mk-old-k8s-version-004791 - found existing host DHCP lease matching {name: "old-k8s-version-004791", mac: "52:54:00:32:09:2e", ip: "192.168.72.11"}
	I0314 00:58:24.512751   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Getting to WaitForSSH function...
	I0314 00:58:24.512763   66232 main.go:141] libmachine: (old-k8s-version-004791) Waiting for SSH to be available...
	I0314 00:58:24.515177   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.515623   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.515657   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.515863   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH client type: external
	I0314 00:58:24.515892   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa (-rw-------)
	I0314 00:58:24.515924   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:58:24.515940   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | About to run SSH command:
	I0314 00:58:24.515956   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | exit 0
	I0314 00:58:24.642866   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | SSH cmd err, output: <nil>: 
	I0314 00:58:24.643186   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetConfigRaw
	I0314 00:58:24.643853   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:24.645950   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.646309   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.646338   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.646566   66232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/config.json ...
	I0314 00:58:24.646801   66232 machine.go:94] provisionDockerMachine start ...
	I0314 00:58:24.646823   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:24.647032   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.649249   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.649588   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.649618   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.649752   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.649926   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.650131   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.650315   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.650487   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.650664   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.650675   66232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:58:24.763290   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:58:24.763320   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:24.763558   66232 buildroot.go:166] provisioning hostname "old-k8s-version-004791"
	I0314 00:58:24.763592   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:24.763745   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.766422   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.766719   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.766745   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.766894   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.767075   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.767238   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.767388   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.767564   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.767776   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.767795   66232 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-004791 && echo "old-k8s-version-004791" | sudo tee /etc/hostname
	I0314 00:58:24.893811   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-004791
	
	I0314 00:58:24.893844   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:24.896527   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.896909   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:24.896937   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:24.897096   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:24.897277   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.897455   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:24.897623   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:24.897814   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:24.897979   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:24.897995   66232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-004791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-004791/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-004791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:25.021661   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:25.021695   66232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:25.021722   66232 buildroot.go:174] setting up certificates
	I0314 00:58:25.021735   66232 provision.go:84] configureAuth start
	I0314 00:58:25.021766   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetMachineName
	I0314 00:58:25.022032   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:25.024687   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.024989   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.025030   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.025155   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.027609   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.027948   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.027977   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.028079   66232 provision.go:143] copyHostCerts
	I0314 00:58:25.028145   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:25.028155   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:25.028208   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:25.028333   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:25.028342   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:25.028361   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:25.028421   66232 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:25.028428   66232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:25.028445   66232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:25.028532   66232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-004791 san=[127.0.0.1 192.168.72.11 localhost minikube old-k8s-version-004791]
	I0314 00:58:25.338174   66232 provision.go:177] copyRemoteCerts
	I0314 00:58:25.338239   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:25.338272   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.340651   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.341044   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.341084   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.341243   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.341445   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.341613   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.341779   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:25.437346   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 00:58:25.464534   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 00:58:25.491186   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:25.520290   66232 provision.go:87] duration metric: took 498.536449ms to configureAuth
	I0314 00:58:25.520330   66232 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:25.520551   66232 config.go:182] Loaded profile config "old-k8s-version-004791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 00:58:25.520631   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.523579   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.523954   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.523982   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.524176   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.524418   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.524604   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.524841   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.525032   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:25.525233   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:25.525267   66232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:25.813702   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:25.813724   66232 machine.go:97] duration metric: took 1.166910056s to provisionDockerMachine
	I0314 00:58:25.813735   66232 start.go:293] postStartSetup for "old-k8s-version-004791" (driver="kvm2")
	I0314 00:58:25.813745   66232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:25.813767   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:25.814102   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:25.814132   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.816973   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.817316   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.817351   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.817496   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.817695   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.817895   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.818065   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:25.905564   66232 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:25.910139   66232 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:25.910168   66232 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:25.910237   66232 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:25.910315   66232 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:25.910406   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:25.919998   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:25.946236   66232 start.go:296] duration metric: took 132.483335ms for postStartSetup
	I0314 00:58:25.946270   66232 fix.go:56] duration metric: took 24.778527973s for fixHost
	I0314 00:58:25.946291   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:25.948993   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.949352   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:25.949382   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:25.949491   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:25.949674   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.949839   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:25.950008   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:25.950178   66232 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:25.950327   66232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0314 00:58:25.950337   66232 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:58:26.059477   66232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377906.045276928
	
	I0314 00:58:26.059498   66232 fix.go:216] guest clock: 1710377906.045276928
	I0314 00:58:26.059504   66232 fix.go:229] Guest: 2024-03-14 00:58:26.045276928 +0000 UTC Remote: 2024-03-14 00:58:25.946273472 +0000 UTC m=+262.884746009 (delta=99.003456ms)
	I0314 00:58:26.059522   66232 fix.go:200] guest clock delta is within tolerance: 99.003456ms
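The fix.go lines above read the guest clock with "date +%s.%N" over SSH and compare it against the host's notion of the same moment; here the skew of 99.003456ms is accepted. A minimal sketch of that comparison follows, with an assumed one-second tolerance since the actual threshold is not shown in the log.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute skew between guest and host clocks and
// whether it falls within the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1710377906, 45276928) // parsed from "1710377906.045276928" above
	host := guest.Add(99 * time.Millisecond) // roughly the delta reported in the log
	d, ok := clockDeltaOK(guest, host, time.Second)
	fmt.Printf("delta=%v, within tolerance=%v\n", d, ok)
}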
	I0314 00:58:26.059528   66232 start.go:83] releasing machines lock for "old-k8s-version-004791", held for 24.891823469s
	I0314 00:58:26.059556   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.059832   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:26.062667   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.063095   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.063126   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.063322   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064047   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064262   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .DriverName
	I0314 00:58:26.064348   66232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:26.064396   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:26.064505   66232 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:26.064530   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHHostname
	I0314 00:58:26.067308   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067569   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067602   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.067626   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.067738   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:26.067912   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:26.068059   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:26.068063   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:26.068095   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:26.068199   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHPort
	I0314 00:58:26.068210   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:26.068347   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHKeyPath
	I0314 00:58:26.068538   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetSSHUsername
	I0314 00:58:26.068717   66232 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/old-k8s-version-004791/id_rsa Username:docker}
	I0314 00:58:26.182072   66232 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:26.188630   66232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:26.337675   66232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:26.344107   66232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:26.344178   66232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:26.363679   66232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:26.363704   66232 start.go:494] detecting cgroup driver to use...
	I0314 00:58:26.363770   66232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:26.380626   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:26.397287   66232 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:26.397354   66232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:26.411921   66232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:26.428111   66232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:26.548503   66232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:26.718585   66232 docker.go:233] disabling docker service ...
	I0314 00:58:26.718667   66232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:26.737814   66232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:26.759326   66232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:26.907505   66232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:27.052915   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:27.074324   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:27.096627   66232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 00:58:27.096688   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.109204   66232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:27.109280   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.122529   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.135542   66232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:27.149084   66232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
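The sed/rm commands above point CRI-O at the registry.k8s.io/pause:3.2 pause image, switch its cgroup manager to cgroupfs, and reset conmon_cgroup in /etc/crio/crio.conf.d/02-crio.conf. Collected as data purely for readability (an illustrative sketch using the same paths and values as the log, not minikube's code):

package main

import "fmt"

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// The same edits the log applies, in order: pause image, cgroup manager,
	// drop any existing conmon_cgroup, then re-add it as "pod".
	steps := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
	for _, s := range steps {
		fmt.Println(s)
	}
}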
	I0314 00:58:27.166838   66232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:27.178148   66232 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:27.178201   66232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:27.194015   66232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:27.206652   66232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:27.363680   66232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 00:58:27.546218   66232 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:27.546291   66232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:27.552622   66232 start.go:562] Will wait 60s for crictl version
	I0314 00:58:27.552693   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:27.557087   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:27.600271   66232 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:27.600369   66232 ssh_runner.go:195] Run: crio --version
	I0314 00:58:27.631397   66232 ssh_runner.go:195] Run: crio --version
	I0314 00:58:27.670760   66232 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 00:58:27.671963   66232 main.go:141] libmachine: (old-k8s-version-004791) Calling .GetIP
	I0314 00:58:27.674890   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:27.675324   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:09:2e", ip: ""} in network mk-old-k8s-version-004791: {Iface:virbr3 ExpiryTime:2024-03-14 01:48:01 +0000 UTC Type:0 Mac:52:54:00:32:09:2e Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:old-k8s-version-004791 Clientid:01:52:54:00:32:09:2e}
	I0314 00:58:27.675352   66232 main.go:141] libmachine: (old-k8s-version-004791) DBG | domain old-k8s-version-004791 has defined IP address 192.168.72.11 and MAC address 52:54:00:32:09:2e in network mk-old-k8s-version-004791
	I0314 00:58:27.675617   66232 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:27.680460   66232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:27.694168   66232 kubeadm.go:877] updating cluster {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:27.694308   66232 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 00:58:27.694363   66232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:27.750541   66232 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:58:27.750608   66232 ssh_runner.go:195] Run: which lz4
	I0314 00:58:27.755341   66232 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 00:58:27.759948   66232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:27.759972   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 00:58:23.835559   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:25.840794   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:28.343597   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:26.087053   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Start
	I0314 00:58:26.087223   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring networks are active...
	I0314 00:58:26.087972   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring network default is active
	I0314 00:58:26.088454   65557 main.go:141] libmachine: (embed-certs-164135) Ensuring network mk-embed-certs-164135 is active
	I0314 00:58:26.088918   65557 main.go:141] libmachine: (embed-certs-164135) Getting domain xml...
	I0314 00:58:26.089551   65557 main.go:141] libmachine: (embed-certs-164135) Creating domain...
	I0314 00:58:27.427891   65557 main.go:141] libmachine: (embed-certs-164135) Waiting to get IP...
	I0314 00:58:27.428743   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.429231   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.429301   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.429210   67191 retry.go:31] will retry after 285.906124ms: waiting for machine to come up
	I0314 00:58:27.716658   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.717175   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.717209   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.717136   67191 retry.go:31] will retry after 261.410434ms: waiting for machine to come up
	I0314 00:58:27.980701   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:27.981229   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:27.981260   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:27.981171   67191 retry.go:31] will retry after 383.915233ms: waiting for machine to come up
	I0314 00:58:28.366876   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:28.367381   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:28.367410   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:28.367323   67191 retry.go:31] will retry after 409.436475ms: waiting for machine to come up
	I0314 00:58:28.778072   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:28.778576   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:28.778610   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:28.778531   67191 retry.go:31] will retry after 645.067189ms: waiting for machine to come up
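The retry.go lines above poll libvirt for the embed-certs-164135 DHCP lease, sleeping a growing, jittered interval between attempts. Below is a minimal sketch of that wait loop; the lookup function, attempt count, and the placeholder address 192.0.2.10 are illustrative only and not minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling lookup until it returns an address, backing off with
// jitter between attempts, roughly like the retry messages above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.0.2.10", nil // placeholder address for the sketch
	}, 10)
	fmt.Println(ip, err)
}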
	I0314 00:58:25.621956   66021 node_ready.go:49] node "default-k8s-diff-port-652215" has status "Ready":"True"
	I0314 00:58:25.621981   66021 node_ready.go:38] duration metric: took 7.505100774s for node "default-k8s-diff-port-652215" to be "Ready" ...
	I0314 00:58:25.622001   66021 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:58:25.629545   66021 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.639732   66021 pod_ready.go:92] pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.639756   66021 pod_ready.go:81] duration metric: took 10.187009ms for pod "coredns-5dd5756b68-cc7x2" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.639764   66021 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.645147   66021 pod_ready.go:92] pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.645169   66021 pod_ready.go:81] duration metric: took 5.39858ms for pod "etcd-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.645177   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.654707   66021 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.654733   66021 pod_ready.go:81] duration metric: took 9.549239ms for pod "kube-apiserver-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.654744   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.662542   66021 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:25.662564   66021 pod_ready.go:81] duration metric: took 7.811214ms for pod "kube-controller-manager-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:25.662573   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:26.022161   66021 pod_ready.go:92] pod "kube-proxy-s7dwp" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:26.022183   66021 pod_ready.go:81] duration metric: took 359.604841ms for pod "kube-proxy-s7dwp" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:26.022192   66021 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:28.034582   66021 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:29.648218   66232 crio.go:444] duration metric: took 1.892901715s to copy over tarball
	I0314 00:58:29.648301   66232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:32.846478   66232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.198145754s)
	I0314 00:58:32.846506   66232 crio.go:451] duration metric: took 3.198257099s to extract the tarball
	I0314 00:58:32.846513   66232 ssh_runner.go:146] rm: /preloaded.tar.lz4
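The lines above show the preload path: the preloaded-images tarball is copied to /preloaded.tar.lz4, extracted into /var with tar over lz4 while preserving security.capability xattrs, and then removed. A sketch of the same sequence, assuming tar and lz4 are available on the guest as they are in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var and removes it,
// using the same tar invocation the log shows.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}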
	I0314 00:58:32.893263   66232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:32.930449   66232 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 00:58:32.930473   66232 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 00:58:32.930511   66232 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:32.930536   66232 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:32.930550   66232 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:32.930559   66232 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:32.930802   66232 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:32.930888   66232 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 00:58:32.930940   66232 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:32.931147   66232 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:32.931888   66232 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:32.931948   66232 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:32.932319   66232 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:32.932341   66232 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 00:58:32.932374   66232 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:32.932381   66232 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:32.932370   66232 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:32.932419   66232 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:30.836400   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:32.841831   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:29.425434   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:29.425984   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:29.426008   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:29.425942   67191 retry.go:31] will retry after 703.398838ms: waiting for machine to come up
	I0314 00:58:30.130649   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:30.131265   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:30.131297   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:30.131224   67191 retry.go:31] will retry after 787.377618ms: waiting for machine to come up
	I0314 00:58:30.919951   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:30.920381   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:30.920416   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:30.920331   67191 retry.go:31] will retry after 1.211901471s: waiting for machine to come up
	I0314 00:58:32.133720   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:32.134308   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:32.134337   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:32.134254   67191 retry.go:31] will retry after 1.852403479s: waiting for machine to come up
	I0314 00:58:33.987895   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:33.988474   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:33.988503   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:33.988426   67191 retry.go:31] will retry after 2.321557159s: waiting for machine to come up
	I0314 00:58:30.530679   66021 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace has status "Ready":"True"
	I0314 00:58:30.530711   66021 pod_ready.go:81] duration metric: took 4.508510256s for pod "kube-scheduler-default-k8s-diff-port-652215" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:30.530725   66021 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	I0314 00:58:32.539227   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:34.543975   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:33.154008   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 00:58:33.158391   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.163815   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.167903   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.168224   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.169039   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.185385   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.418931   66232 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 00:58:33.418981   66232 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 00:58:33.419031   66232 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 00:58:33.419052   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419063   66232 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 00:58:33.419099   66232 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.419031   66232 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 00:58:33.419118   66232 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 00:58:33.419141   66232 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.419173   66232 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 00:58:33.419200   66232 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.419232   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419099   66232 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.419310   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419177   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419143   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419142   66232 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 00:58:33.419396   66232 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.419419   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.419144   66232 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.419472   66232 ssh_runner.go:195] Run: which crictl
	I0314 00:58:33.436581   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 00:58:33.436585   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 00:58:33.436693   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 00:58:33.436697   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 00:58:33.436760   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 00:58:33.436812   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 00:58:33.436821   66232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 00:58:33.605693   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 00:58:33.605727   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 00:58:33.605788   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 00:58:33.605799   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 00:58:33.605879   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 00:58:33.605912   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 00:58:33.605952   66232 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 00:58:33.844071   66232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:58:33.989885   66232 cache_images.go:92] duration metric: took 1.059398314s to LoadCachedImages
	W0314 00:58:33.990001   66232 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18375-4912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
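The cache_images flow above decides an image "needs transfer" when sudo podman image inspect --format {{.Id}} does not return the expected hash, and the missing etcd_3.4.13-0 cache file produces the warning. A sketch of that existence check follows, reusing the pause:3.2 hash from the log; it is illustrative only, not minikube's cache_images code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageMatches reports whether the container runtime already has image at the
// expected ID, the check behind the "needs transfer" messages above.
func imageMatches(image, wantID string) (bool, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return false, fmt.Errorf("inspect %s: %w", image, err)
	}
	return strings.TrimSpace(string(out)) == wantID, nil
}

func main() {
	ok, err := imageMatches("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
	fmt.Println(ok, err)
}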
	I0314 00:58:33.990027   66232 kubeadm.go:928] updating node { 192.168.72.11 8443 v1.20.0 crio true true} ...
	I0314 00:58:33.990157   66232 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-004791 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 00:58:33.990220   66232 ssh_runner.go:195] Run: crio config
	I0314 00:58:34.044723   66232 cni.go:84] Creating CNI manager for ""
	I0314 00:58:34.044746   66232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:34.044759   66232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:34.044775   66232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-004791 NodeName:old-k8s-version-004791 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 00:58:34.044900   66232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-004791"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:58:34.044958   66232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 00:58:34.059679   66232 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:34.059734   66232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:34.073682   66232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0314 00:58:34.095098   66232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:34.113899   66232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0314 00:58:34.132875   66232 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:34.137285   66232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:34.151566   66232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:34.276059   66232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:34.295472   66232 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791 for IP: 192.168.72.11
	I0314 00:58:34.295496   66232 certs.go:194] generating shared ca certs ...
	I0314 00:58:34.295528   66232 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:34.295718   66232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:34.295779   66232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:34.295794   66232 certs.go:256] generating profile certs ...
	I0314 00:58:34.295909   66232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/client.key
	I0314 00:58:34.295968   66232 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key.c57f8e0c
	I0314 00:58:34.296022   66232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key
	I0314 00:58:34.296176   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:34.296213   66232 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:34.296224   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:34.296255   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:34.296296   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:34.296336   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:34.296397   66232 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:34.297181   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:34.351330   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:34.389003   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:34.439281   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:34.476704   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 00:58:34.524931   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:58:34.554905   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:34.584216   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/old-k8s-version-004791/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:58:34.610661   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:34.636484   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:34.662623   66232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:34.692373   66232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:34.714670   66232 ssh_runner.go:195] Run: openssl version
	I0314 00:58:34.721394   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:34.734219   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.739692   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.739767   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:34.746281   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:34.758520   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:34.770960   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.775963   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.776034   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:34.782485   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:34.795932   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:34.808632   66232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.814277   66232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.814338   66232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:34.820985   66232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
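
The sequence above copies each CA into /usr/share/ca-certificates and then links it under /etc/ssl/certs by its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how the guest's trust store discovers the minikube and user certificates. A minimal sketch of that hash-and-link step, with a hypothetical helper name, shelling out to the same commands the log shows (the real flow runs them over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCACert mirrors the sequence in the log above: compute the OpenSSL
// subject hash of a PEM certificate and symlink it into /etc/ssl/certs as
// <hash>.0 so TLS tooling that scans the hashed directory can find it.
// Hypothetical helper, shown for illustration only.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	// -f replaces a stale link, matching the `ln -fs` calls in the log.
	return exec.Command("sudo", "ln", "-fs", pemPath, "/etc/ssl/certs/"+hash+".0").Run()
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
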
	I0314 00:58:34.832959   66232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:34.838642   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:34.845061   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:34.852475   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:34.859861   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:34.866413   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:34.873327   66232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
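
The openssl x509 -checkend 86400 probes above ask whether each control-plane certificate expires within the next 24 hours; a non-zero exit means the certificate expires inside that window. A minimal sketch of that check, assuming a hypothetical wrapper:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"time"
)

// certExpiresSoon runs the same probe as the log: `openssl x509 -checkend N`
// exits non-zero when the certificate expires within the next N seconds.
// Hypothetical wrapper, shown for illustration only.
func certExpiresSoon(path string, window time.Duration) bool {
	secs := strconv.Itoa(int(window.Seconds()))
	return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", secs).Run() != nil
}

func main() {
	fmt.Println(certExpiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
}
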
	I0314 00:58:34.880000   66232 kubeadm.go:391] StartCluster: {Name:old-k8s-version-004791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:34.880134   66232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:34.880194   66232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:34.927555   66232 cri.go:89] found id: ""
	I0314 00:58:34.927623   66232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:34.939638   66232 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:34.939668   66232 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:34.939677   66232 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:34.939741   66232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:34.950530   66232 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:34.952013   66232 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-004791" does not appear in /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:58:34.952997   66232 kubeconfig.go:62] /home/jenkins/minikube-integration/18375-4912/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-004791" cluster setting kubeconfig missing "old-k8s-version-004791" context setting]
	I0314 00:58:34.954526   66232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:34.956927   66232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:34.968566   66232 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.11
	I0314 00:58:34.968605   66232 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:34.968619   66232 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:34.968700   66232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:35.007848   66232 cri.go:89] found id: ""
	I0314 00:58:35.007925   66232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:35.025328   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:35.038637   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:35.038656   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:35.038709   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:35.050807   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:35.050869   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:35.063219   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:35.075855   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:35.075920   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:35.085699   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:35.095334   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:35.095380   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:35.105241   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:35.115726   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:35.115792   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:35.125426   66232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:35.135277   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:35.258033   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.100884   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.354746   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.473996   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:36.579335   66232 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:36.579424   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:37.079896   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:37.579976   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:38.079765   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
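
The repeated pgrep runs above are minikube waiting for the kube-apiserver process to appear after the kubeadm init phases: the same check is retried on a short interval until it succeeds or an overall timeout elapses. A minimal sketch of that polling pattern (hypothetical function; the timeout here is chosen arbitrarily):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer re-runs the same process check the log shows until the
// kube-apiserver appears or the context expires. Illustrative sketch only;
// the real flow executes the command on the guest over SSH.
func waitForAPIServer(ctx context.Context, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		// pgrep -xnf matches the full command line of the newest matching process.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for kube-apiserver: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx, 500*time.Millisecond))
}
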
	I0314 00:58:35.336276   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:37.336541   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:36.312235   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:36.312720   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:36.312746   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:36.312680   67191 retry.go:31] will retry after 2.808090469s: waiting for machine to come up
	I0314 00:58:39.123977   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:39.124488   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:39.124538   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:39.124440   67191 retry.go:31] will retry after 2.588860378s: waiting for machine to come up
	I0314 00:58:37.037739   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:39.540372   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:38.579818   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.079976   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.579658   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:40.079585   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:40.580162   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:41.079979   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:41.580503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:42.079887   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:42.579730   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:43.080073   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:39.838343   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:42.335840   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:41.714544   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:41.715054   65557 main.go:141] libmachine: (embed-certs-164135) DBG | unable to find current IP address of domain embed-certs-164135 in network mk-embed-certs-164135
	I0314 00:58:41.715078   65557 main.go:141] libmachine: (embed-certs-164135) DBG | I0314 00:58:41.715008   67191 retry.go:31] will retry after 4.450032332s: waiting for machine to come up
	I0314 00:58:41.540801   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:44.037483   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:43.579875   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.080058   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.579576   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:45.080234   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:45.579747   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:46.080269   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:46.579541   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:47.079514   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:47.580409   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:48.080337   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:44.337213   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:46.835872   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:46.166725   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.167181   65557 main.go:141] libmachine: (embed-certs-164135) Found IP for machine: 192.168.50.72
	I0314 00:58:46.167200   65557 main.go:141] libmachine: (embed-certs-164135) Reserving static IP address...
	I0314 00:58:46.167211   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has current primary IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.167614   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "embed-certs-164135", mac: "52:54:00:58:8b:2b", ip: "192.168.50.72"} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.167650   65557 main.go:141] libmachine: (embed-certs-164135) Reserved static IP address: 192.168.50.72
	I0314 00:58:46.167671   65557 main.go:141] libmachine: (embed-certs-164135) DBG | skip adding static IP to network mk-embed-certs-164135 - found existing host DHCP lease matching {name: "embed-certs-164135", mac: "52:54:00:58:8b:2b", ip: "192.168.50.72"}
	I0314 00:58:46.167691   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Getting to WaitForSSH function...
	I0314 00:58:46.167705   65557 main.go:141] libmachine: (embed-certs-164135) Waiting for SSH to be available...
	I0314 00:58:46.169798   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.170208   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.170241   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.170374   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Using SSH client type: external
	I0314 00:58:46.170395   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa (-rw-------)
	I0314 00:58:46.170424   65557 main.go:141] libmachine: (embed-certs-164135) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 00:58:46.170436   65557 main.go:141] libmachine: (embed-certs-164135) DBG | About to run SSH command:
	I0314 00:58:46.170448   65557 main.go:141] libmachine: (embed-certs-164135) DBG | exit 0
	I0314 00:58:46.298947   65557 main.go:141] libmachine: (embed-certs-164135) DBG | SSH cmd err, output: <nil>: 
	I0314 00:58:46.299260   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetConfigRaw
	I0314 00:58:46.300011   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:46.302213   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.302573   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.302601   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.302857   65557 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/config.json ...
	I0314 00:58:46.303051   65557 machine.go:94] provisionDockerMachine start ...
	I0314 00:58:46.303073   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:46.303267   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.305543   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.305933   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.305966   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.306127   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.306278   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.306414   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.306542   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.306693   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.306879   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.306892   65557 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:58:46.423896   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 00:58:46.423927   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.424233   65557 buildroot.go:166] provisioning hostname "embed-certs-164135"
	I0314 00:58:46.424264   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.424489   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.427579   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.428007   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.428038   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.428220   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.428416   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.428609   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.428790   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.428972   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.429192   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.429222   65557 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-164135 && echo "embed-certs-164135" | sudo tee /etc/hostname
	I0314 00:58:46.563737   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-164135
	
	I0314 00:58:46.563766   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.566892   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.567220   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.567251   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.567453   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.567641   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.567802   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.567945   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.568094   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:46.568261   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:46.568276   65557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-164135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-164135/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-164135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:58:46.693410   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:58:46.693445   65557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4912/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4912/.minikube}
	I0314 00:58:46.693499   65557 buildroot.go:174] setting up certificates
	I0314 00:58:46.693511   65557 provision.go:84] configureAuth start
	I0314 00:58:46.693529   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetMachineName
	I0314 00:58:46.693870   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:46.696706   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.697040   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.697071   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.697225   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.699614   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.699942   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.699973   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.700098   65557 provision.go:143] copyHostCerts
	I0314 00:58:46.700164   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem, removing ...
	I0314 00:58:46.700178   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem
	I0314 00:58:46.700232   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/ca.pem (1078 bytes)
	I0314 00:58:46.700361   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem, removing ...
	I0314 00:58:46.700377   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem
	I0314 00:58:46.700411   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/cert.pem (1123 bytes)
	I0314 00:58:46.700495   65557 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem, removing ...
	I0314 00:58:46.700505   65557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem
	I0314 00:58:46.700528   65557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4912/.minikube/key.pem (1675 bytes)
	I0314 00:58:46.700580   65557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem org=jenkins.embed-certs-164135 san=[127.0.0.1 192.168.50.72 embed-certs-164135 localhost minikube]
	I0314 00:58:46.821935   65557 provision.go:177] copyRemoteCerts
	I0314 00:58:46.822010   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:58:46.822046   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:46.824932   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.825275   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:46.825310   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:46.825512   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:46.825744   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:46.825887   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:46.826082   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:46.913839   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:58:46.943631   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 00:58:46.971617   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 00:58:46.999369   65557 provision.go:87] duration metric: took 305.844222ms to configureAuth
	I0314 00:58:46.999394   65557 buildroot.go:189] setting minikube options for container-runtime
	I0314 00:58:46.999570   65557 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:58:46.999664   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.002702   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.003165   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.003190   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.003438   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.003687   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.003859   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.004006   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.004146   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:47.004340   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:47.004358   65557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 00:58:47.290132   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 00:58:47.290155   65557 machine.go:97] duration metric: took 987.089694ms to provisionDockerMachine
	I0314 00:58:47.290168   65557 start.go:293] postStartSetup for "embed-certs-164135" (driver="kvm2")
	I0314 00:58:47.290182   65557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:58:47.290203   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.290511   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:58:47.290552   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.293582   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.293932   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.293962   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.294089   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.294272   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.294428   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.294671   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.387339   65557 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:58:47.392557   65557 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 00:58:47.392582   65557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/addons for local assets ...
	I0314 00:58:47.392654   65557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4912/.minikube/files for local assets ...
	I0314 00:58:47.392748   65557 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem -> 122682.pem in /etc/ssl/certs
	I0314 00:58:47.392858   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 00:58:47.404173   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:47.435222   65557 start.go:296] duration metric: took 145.038242ms for postStartSetup
	I0314 00:58:47.435269   65557 fix.go:56] duration metric: took 21.375588272s for fixHost
	I0314 00:58:47.435302   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.438631   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.439032   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.439076   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.439272   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.439467   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.439706   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.439850   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.440043   65557 main.go:141] libmachine: Using SSH client type: native
	I0314 00:58:47.440200   65557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0314 00:58:47.440210   65557 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 00:58:47.560144   65557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710377927.541841951
	
	I0314 00:58:47.560170   65557 fix.go:216] guest clock: 1710377927.541841951
	I0314 00:58:47.560182   65557 fix.go:229] Guest: 2024-03-14 00:58:47.541841951 +0000 UTC Remote: 2024-03-14 00:58:47.435274983 +0000 UTC m=+363.148559319 (delta=106.566968ms)
	I0314 00:58:47.560225   65557 fix.go:200] guest clock delta is within tolerance: 106.566968ms
	I0314 00:58:47.560232   65557 start.go:83] releasing machines lock for "embed-certs-164135", held for 21.500586263s
	I0314 00:58:47.560259   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.560524   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:47.563578   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.563984   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.564007   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.564165   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564627   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564837   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:58:47.564919   65557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:58:47.564973   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.565070   65557 ssh_runner.go:195] Run: cat /version.json
	I0314 00:58:47.565097   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:58:47.567831   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568013   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568257   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.568284   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568398   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:47.568422   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:47.568432   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.568625   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.568630   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:58:47.568821   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.568824   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:58:47.568927   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.568980   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:58:47.569131   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:58:47.652798   65557 ssh_runner.go:195] Run: systemctl --version
	I0314 00:58:47.689415   65557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 00:58:47.842567   65557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 00:58:47.849511   65557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 00:58:47.849574   65557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:58:47.868424   65557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 00:58:47.868448   65557 start.go:494] detecting cgroup driver to use...
	I0314 00:58:47.868509   65557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 00:58:47.887449   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 00:58:47.902382   65557 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:58:47.902442   65557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:58:47.916938   65557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:58:47.932214   65557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:58:48.055437   65557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:58:48.233856   65557 docker.go:233] disabling docker service ...
	I0314 00:58:48.233932   65557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:58:48.250632   65557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:58:48.265181   65557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:58:48.397526   65557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:58:48.539003   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:58:48.555791   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:58:48.576760   65557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 00:58:48.576812   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.589305   65557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 00:58:48.589410   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.602952   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.614619   65557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 00:58:48.626026   65557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:58:48.637921   65557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:58:48.648336   65557 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 00:58:48.648397   65557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 00:58:48.663603   65557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:58:48.674731   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:48.804506   65557 ssh_runner.go:195] Run: sudo systemctl restart crio
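
The block above reconfigures CRI-O by editing its drop-in config in place with sed (pause image and cgroupfs cgroup manager shown here), then reloads systemd and restarts the service. A minimal sketch of that edit-and-restart step, using the same drop-in path the log shows and a hypothetical helper (the real flow runs each command over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// configureCRIO applies the same kind of edits the log shows: point cri-o at
// the desired pause image, force the cgroupfs cgroup manager, then restart
// the service. Hypothetical helper, shown for illustration only.
func configureCRIO(pauseImage string) error {
	steps := [][]string{
		{"sh", "-c", fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage)},
		{"sh", "-c", `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`},
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", s, err, out)
		}
	}
	return nil
}

func main() {
	if err := configureCRIO("registry.k8s.io/pause:3.9"); err != nil {
		fmt.Println(err)
	}
}
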
	I0314 00:58:48.949960   65557 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 00:58:48.950037   65557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 00:58:48.955185   65557 start.go:562] Will wait 60s for crictl version
	I0314 00:58:48.955248   65557 ssh_runner.go:195] Run: which crictl
	I0314 00:58:48.959205   65557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:58:48.998285   65557 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 00:58:48.998378   65557 ssh_runner.go:195] Run: crio --version
	I0314 00:58:49.028352   65557 ssh_runner.go:195] Run: crio --version
	I0314 00:58:49.061493   65557 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 00:58:49.062817   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetIP
	I0314 00:58:49.065664   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:49.066015   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:58:49.066042   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:58:49.066240   65557 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0314 00:58:49.071178   65557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:49.085832   65557 kubeadm.go:877] updating cluster {Name:embed-certs-164135 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:58:49.086050   65557 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 00:58:49.086127   65557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:49.127181   65557 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 00:58:49.127258   65557 ssh_runner.go:195] Run: which lz4
	I0314 00:58:49.131578   65557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 00:58:49.136474   65557 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 00:58:49.136504   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 00:58:46.038840   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:48.540509   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:48.579595   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.079898   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:49.580139   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:50.079945   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:50.579977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:51.079981   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:51.580391   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:52.080057   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:52.579968   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:53.080503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
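The repeated pgrep polls interleaved here (process 66232, and again further below) are how minikube waits for a kube-apiserver process to exist. The flags matter: -f matches against the full command line, -x requires the pattern to match that entire command line, and -n reports only the newest matching PID, so each poll is roughly:

    # exits 0 (printing the newest PID) once a kube-apiserver with "minikube" in its
    # command line is running; exits 1 until then
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'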
	I0314 00:58:49.336251   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:51.841160   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:50.939606   65557 crio.go:444] duration metric: took 1.808075483s to copy over tarball
	I0314 00:58:50.939682   65557 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 00:58:53.536072   65557 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.596358521s)
	I0314 00:58:53.536109   65557 crio.go:451] duration metric: took 2.596476827s to extract the tarball
	I0314 00:58:53.536119   65557 ssh_runner.go:146] rm: /preloaded.tar.lz4
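Spelling out the extraction command above: -I lz4 filters the archive through lz4, -C /var unpacks it under /var (where the cri-o image store lives), and --xattrs --xattrs-include security.capability preserves the file-capability xattrs some preloaded binaries rely on. Run directly on the node it would be:

    # decompress with lz4 and unpack the preloaded image store into /var, keeping capability xattrs
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4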
	I0314 00:58:53.579265   65557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:58:53.626350   65557 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 00:58:53.626371   65557 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:58:53.626378   65557 kubeadm.go:928] updating node { 192.168.50.72 8443 v1.28.4 crio true true} ...
	I0314 00:58:53.626500   65557 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-164135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
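The kubelet unit text above is what minikube writes out as the kubelet systemd unit and its 10-kubeadm.conf drop-in (both scp'd a few lines further down). Not part of the test itself, but the standard way to confirm what the node actually ended up running with is:

    # show the unit together with any drop-ins, then reload and restart it
    sudo systemctl cat kubelet
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
    # last 50 kubelet log lines, useful when the apiserver never comes up
    sudo journalctl -u kubelet --no-pager -n 50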
	I0314 00:58:53.626586   65557 ssh_runner.go:195] Run: crio config
	I0314 00:58:53.679923   65557 cni.go:84] Creating CNI manager for ""
	I0314 00:58:53.679946   65557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:58:53.679958   65557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:58:53.679976   65557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.72 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-164135 NodeName:embed-certs-164135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:58:53.680104   65557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-164135"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
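The block above is the generated kubeadm.yaml (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents) that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. If you want to compare it against upstream defaults for the same config kinds, kubeadm can print those directly; this is purely for reference, not something the test runs:

    # inspect what actually landed on the node
    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    # print kubeadm's defaults for the same config kinds, for a side-by-side diff
    kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration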
	I0314 00:58:53.680163   65557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:58:53.690891   65557 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:58:53.690972   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:58:53.701173   65557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0314 00:58:53.719020   65557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:58:53.737828   65557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0314 00:58:53.756425   65557 ssh_runner.go:195] Run: grep 192.168.50.72	control-plane.minikube.internal$ /etc/hosts
	I0314 00:58:53.760294   65557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:58:53.773705   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:58:53.892346   65557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:58:53.910603   65557 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135 for IP: 192.168.50.72
	I0314 00:58:53.910627   65557 certs.go:194] generating shared ca certs ...
	I0314 00:58:53.910647   65557 certs.go:226] acquiring lock for ca certs: {Name:mkbdff29b987e0736a4e1c4659d995418ae18da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:58:53.910827   65557 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key
	I0314 00:58:53.910871   65557 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key
	I0314 00:58:53.910880   65557 certs.go:256] generating profile certs ...
	I0314 00:58:53.910979   65557 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/client.key
	I0314 00:58:53.911031   65557 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.key.e2917335
	I0314 00:58:53.911064   65557 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.key
	I0314 00:58:53.911166   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem (1338 bytes)
	W0314 00:58:53.911192   65557 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268_empty.pem, impossibly tiny 0 bytes
	I0314 00:58:53.911239   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:58:53.911262   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:58:53.911282   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:58:53.911306   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/certs/key.pem (1675 bytes)
	I0314 00:58:53.911340   65557 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem (1708 bytes)
	I0314 00:58:53.911957   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:58:53.966930   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 00:58:54.004054   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:58:54.052130   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 00:58:54.079203   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0314 00:58:54.120151   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:58:54.148078   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:58:54.176982   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/embed-certs-164135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 00:58:54.205291   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/ssl/certs/122682.pem --> /usr/share/ca-certificates/122682.pem (1708 bytes)
	I0314 00:58:54.231890   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:58:54.258106   65557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4912/.minikube/certs/12268.pem --> /usr/share/ca-certificates/12268.pem (1338 bytes)
	I0314 00:58:54.284561   65557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:58:54.303013   65557 ssh_runner.go:195] Run: openssl version
	I0314 00:58:54.309043   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12268.pem && ln -fs /usr/share/ca-certificates/12268.pem /etc/ssl/certs/12268.pem"
	I0314 00:58:54.320237   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.325350   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 13 23:36 /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.325394   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12268.pem
	I0314 00:58:54.331618   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12268.pem /etc/ssl/certs/51391683.0"
	I0314 00:58:51.037616   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:53.039388   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:53.579463   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.080043   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.579984   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:55.080165   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:55.580029   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.079980   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.580014   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.080139   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.580122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:58.080405   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:54.335226   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:56.841123   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:54.343570   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122682.pem && ln -fs /usr/share/ca-certificates/122682.pem /etc/ssl/certs/122682.pem"
	I0314 00:58:54.542451   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.547508   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 13 23:36 /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.547561   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122682.pem
	I0314 00:58:54.553553   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122682.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 00:58:54.565071   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:58:54.577055   65557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.582453   65557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.582503   65557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:58:54.588916   65557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:58:54.601405   65557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:58:54.606092   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 00:58:54.612639   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 00:58:54.619071   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 00:58:54.625702   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 00:58:54.631739   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 00:58:54.637769   65557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
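The string of openssl runs above is a validity sweep over the existing control-plane certificates: -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), so a restart only reuses certs that are not about to expire. A standalone equivalent for a single cert:

    # succeeds only if the apiserver cert is still valid for at least another day
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "cert ok" || echo "cert expires within 24h"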
	I0314 00:58:54.644061   65557 kubeadm.go:391] StartCluster: {Name:embed-certs-164135 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-164135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:58:54.644158   65557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 00:58:54.644207   65557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:54.683466   65557 cri.go:89] found id: ""
	I0314 00:58:54.683537   65557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 00:58:54.695034   65557 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 00:58:54.695056   65557 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 00:58:54.695062   65557 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 00:58:54.695122   65557 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 00:58:54.706010   65557 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:58:54.707111   65557 kubeconfig.go:125] found "embed-certs-164135" server: "https://192.168.50.72:8443"
	I0314 00:58:54.709121   65557 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 00:58:54.722953   65557 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.72
	I0314 00:58:54.722994   65557 kubeadm.go:1153] stopping kube-system containers ...
	I0314 00:58:54.723009   65557 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 00:58:54.723100   65557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:58:54.787268   65557 cri.go:89] found id: ""
	I0314 00:58:54.787345   65557 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 00:58:54.816753   65557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:58:54.828303   65557 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:58:54.828333   65557 kubeadm.go:156] found existing configuration files:
	
	I0314 00:58:54.828385   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:58:54.841953   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:58:54.842070   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:58:54.854072   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:58:54.867993   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:58:54.868062   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:58:54.878707   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:58:54.888993   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:58:54.889070   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:58:54.899214   65557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:58:54.909228   65557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:58:54.909279   65557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 00:58:54.920066   65557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:58:54.931094   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.052967   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.727704   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:55.951743   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:56.038342   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:58:56.138332   65557 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:58:56.138421   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:56.639433   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.138622   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:57.167124   65557 api_server.go:72] duration metric: took 1.028792267s to wait for apiserver process to appear ...
	I0314 00:58:57.167147   65557 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:58:57.167168   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:58:57.167606   65557 api_server.go:269] stopped: https://192.168.50.72:8443/healthz: Get "https://192.168.50.72:8443/healthz": dial tcp 192.168.50.72:8443: connect: connection refused
	I0314 00:58:57.668020   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:58:55.579569   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:58:58.039695   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:00.039862   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:00.321979   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:59:00.322014   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:59:00.322033   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:00.354801   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 00:59:00.354829   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 00:59:00.668268   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:00.673345   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:59:00.673375   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:59:01.167291   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:01.172646   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 00:59:01.172674   65557 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 00:59:01.667928   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 00:59:01.675916   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0314 00:59:01.684834   65557 api_server.go:141] control plane version: v1.28.4
	I0314 00:59:01.684866   65557 api_server.go:131] duration metric: took 4.517711081s to wait for apiserver health ...
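For the healthz exchange above: the early 403s are most likely the apiserver answering before its RBAC bootstrap roles have been created, the 500s list every internal check (the [-] lines are the failing post-start hooks), and the final 200/ok is what the wait loop is looking for. A manual probe against this cluster would look like the following (IP taken from this run; -k because the cluster CA is not in the host trust store):

    # plain health check, then the verbose per-check output seen in the 500 responses
    curl -k https://192.168.50.72:8443/healthz
    curl -k "https://192.168.50.72:8443/healthz?verbose"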
	I0314 00:59:01.684877   65557 cni.go:84] Creating CNI manager for ""
	I0314 00:59:01.684886   65557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 00:59:01.687151   65557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 00:58:58.580011   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:59.079610   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:58:59.579674   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:00.079861   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:00.579713   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.079977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.580027   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:02.079793   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:02.579549   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:03.080040   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:01.688950   65557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 00:59:01.730963   65557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 00:59:01.777163   65557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:59:01.788546   65557 system_pods.go:59] 8 kube-system pods found
	I0314 00:59:01.788590   65557 system_pods.go:61] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 00:59:01.788602   65557 system_pods.go:61] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 00:59:01.788614   65557 system_pods.go:61] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 00:59:01.788626   65557 system_pods.go:61] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 00:59:01.788641   65557 system_pods.go:61] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 00:59:01.788650   65557 system_pods.go:61] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 00:59:01.788662   65557 system_pods.go:61] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 00:59:01.788681   65557 system_pods.go:61] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 00:59:01.788692   65557 system_pods.go:74] duration metric: took 11.509392ms to wait for pod list to return data ...
	I0314 00:59:01.788701   65557 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:59:01.795122   65557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 00:59:01.795147   65557 node_conditions.go:123] node cpu capacity is 2
	I0314 00:59:01.795157   65557 node_conditions.go:105] duration metric: took 6.44942ms to run NodePressure ...
	I0314 00:59:01.795172   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 00:59:02.044317   65557 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 00:59:02.050019   65557 kubeadm.go:733] kubelet initialised
	I0314 00:59:02.050040   65557 kubeadm.go:734] duration metric: took 5.70331ms waiting for restarted kubelet to initialise ...
	I0314 00:59:02.050049   65557 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:02.056678   65557 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.061780   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.061803   65557 pod_ready.go:81] duration metric: took 5.104116ms for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.061811   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.061817   65557 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.067102   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "etcd-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.067123   65557 pod_ready.go:81] duration metric: took 5.298132ms for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.067134   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "etcd-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.067142   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.072079   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.072097   65557 pod_ready.go:81] duration metric: took 4.946567ms for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.072105   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.072110   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.181781   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.181814   65557 pod_ready.go:81] duration metric: took 109.687713ms for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.181827   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.181835   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.581700   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-proxy-wjz6d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.581726   65557 pod_ready.go:81] duration metric: took 399.880012ms for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.581734   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-proxy-wjz6d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.581741   65557 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:02.981386   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.981415   65557 pod_ready.go:81] duration metric: took 399.66708ms for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:02.981428   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:02.981434   65557 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:03.381927   65557 pod_ready.go:97] node "embed-certs-164135" hosting pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:03.381964   65557 pod_ready.go:81] duration metric: took 400.519247ms for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	E0314 00:59:03.381976   65557 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-164135" hosting pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:03.381986   65557 pod_ready.go:38] duration metric: took 1.331926826s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:03.382007   65557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:59:03.397550   65557 ops.go:34] apiserver oom_adj: -16
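The oom_adj read above is a sanity check that the restarted apiserver is shielded from the kernel OOM killer: oom_adj uses the legacy -17..15 scale, where strongly negative values make the process far less likely to be killed, and -16 lines up with the kubelet's usual -997 oom_score_adj for critical static pods. A rough equivalent on the node:

    # print the OOM adjustment of the newest kube-apiserver process (legacy scale, -17..15)
    cat /proc/$(pgrep -n kube-apiserver)/oom_adj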
	I0314 00:59:03.397571   65557 kubeadm.go:591] duration metric: took 8.702501848s to restartPrimaryControlPlane
	I0314 00:59:03.397583   65557 kubeadm.go:393] duration metric: took 8.753529728s to StartCluster
	I0314 00:59:03.397601   65557 settings.go:142] acquiring lock: {Name:mkb0323878dd066b115f2db508bd44d619a61f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:59:03.397687   65557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:59:03.399793   65557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4912/kubeconfig: {Name:mkb99729da10d4528f00764b6f1d1ffeb9bb113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:59:03.400058   65557 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 00:59:03.402113   65557 out.go:177] * Verifying Kubernetes components...
	I0314 00:59:03.400139   65557 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 00:59:03.400293   65557 config.go:182] Loaded profile config "embed-certs-164135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:59:03.403722   65557 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-164135"
	I0314 00:59:03.403746   65557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:59:03.403773   65557 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-164135"
	W0314 00:59:03.403788   65557 addons.go:243] addon storage-provisioner should already be in state true
	I0314 00:59:03.403822   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.403725   65557 addons.go:69] Setting metrics-server=true in profile "embed-certs-164135"
	I0314 00:59:03.403888   65557 addons.go:234] Setting addon metrics-server=true in "embed-certs-164135"
	W0314 00:59:03.403922   65557 addons.go:243] addon metrics-server should already be in state true
	I0314 00:59:03.403727   65557 addons.go:69] Setting default-storageclass=true in profile "embed-certs-164135"
	I0314 00:59:03.403960   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.403978   65557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-164135"
	I0314 00:59:03.404257   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404295   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.404316   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404332   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.404355   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.404387   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.420268   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0314 00:59:03.420835   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.421449   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.421474   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.421817   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.421860   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0314 00:59:03.422393   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.422414   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.422447   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.422893   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.422917   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.423232   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.423387   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.423804   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44335
	I0314 00:59:03.424136   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.424718   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.424737   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.426912   65557 addons.go:234] Setting addon default-storageclass=true in "embed-certs-164135"
	W0314 00:59:03.426935   65557 addons.go:243] addon default-storageclass should already be in state true
	I0314 00:59:03.426962   65557 host.go:66] Checking if "embed-certs-164135" exists ...
	I0314 00:59:03.427356   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.427387   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.427586   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.428046   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.428077   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.440982   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40425
	I0314 00:59:03.441492   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.442055   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.442077   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.442569   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I0314 00:59:03.442608   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.442838   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.443084   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.443708   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.443729   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.444112   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.444150   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33079
	I0314 00:59:03.444307   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.444598   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.444915   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.445374   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.445408   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.448170   65557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:59:03.445928   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.445963   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.449754   65557 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:59:03.448952   65557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:59:03.449778   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:59:03.451092   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.451092   65557 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 00:58:59.336088   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:01.338156   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:03.452582   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:59:03.451157   65557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:59:03.452695   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:59:03.452720   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.454750   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.455252   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.455282   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.455410   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.455600   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.455777   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.455944   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.455989   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.456439   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.456477   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.456710   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.456869   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.457034   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.457226   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.469815   65557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35435
	I0314 00:59:03.470353   65557 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:59:03.470873   65557 main.go:141] libmachine: Using API Version  1
	I0314 00:59:03.470895   65557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:59:03.471166   65557 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:59:03.471370   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetState
	I0314 00:59:03.472977   65557 main.go:141] libmachine: (embed-certs-164135) Calling .DriverName
	I0314 00:59:03.473244   65557 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:59:03.473258   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:59:03.473271   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHHostname
	I0314 00:59:03.476223   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.476682   65557 main.go:141] libmachine: (embed-certs-164135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8b:2b", ip: ""} in network mk-embed-certs-164135: {Iface:virbr2 ExpiryTime:2024-03-14 01:58:38 +0000 UTC Type:0 Mac:52:54:00:58:8b:2b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:embed-certs-164135 Clientid:01:52:54:00:58:8b:2b}
	I0314 00:59:03.476709   65557 main.go:141] libmachine: (embed-certs-164135) DBG | domain embed-certs-164135 has defined IP address 192.168.50.72 and MAC address 52:54:00:58:8b:2b in network mk-embed-certs-164135
	I0314 00:59:03.476857   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHPort
	I0314 00:59:03.477040   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHKeyPath
	I0314 00:59:03.477171   65557 main.go:141] libmachine: (embed-certs-164135) Calling .GetSSHUsername
	I0314 00:59:03.477302   65557 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/embed-certs-164135/id_rsa Username:docker}
	I0314 00:59:03.616718   65557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:59:03.634198   65557 node_ready.go:35] waiting up to 6m0s for node "embed-certs-164135" to be "Ready" ...
	I0314 00:59:03.716113   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:59:03.749507   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:59:03.749536   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 00:59:03.755619   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:59:03.790208   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:59:03.790231   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:59:03.846087   65557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:59:03.846118   65557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:59:03.892534   65557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:59:04.977315   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.221655296s)
	I0314 00:59:04.977372   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977386   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977433   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.261285831s)
	I0314 00:59:04.977471   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977481   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977698   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.977722   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.977731   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.977738   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.977783   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.977705   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.978016   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.978033   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.978067   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.978803   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.978822   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.978842   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.978883   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.980542   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.980629   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.980683   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:04.985502   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:04.985521   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:04.985822   65557 main.go:141] libmachine: (embed-certs-164135) DBG | Closing plugin on server side
	I0314 00:59:04.985854   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:04.985862   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.071684   65557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.179091576s)
	I0314 00:59:05.071736   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:05.071751   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:05.072016   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:05.072040   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.072050   65557 main.go:141] libmachine: Making call to close driver server
	I0314 00:59:05.072057   65557 main.go:141] libmachine: (embed-certs-164135) Calling .Close
	I0314 00:59:05.072248   65557 main.go:141] libmachine: Successfully made call to close driver server
	I0314 00:59:05.072260   65557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 00:59:05.072271   65557 addons.go:470] Verifying addon metrics-server=true in "embed-certs-164135"
	I0314 00:59:05.074420   65557 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 00:59:02.537641   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:04.539777   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:03.580280   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:04.079957   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:04.580070   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:05.079965   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:05.580193   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:06.079657   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:06.580026   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:07.080460   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:07.579573   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:08.079458   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:03.836267   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:05.837427   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:07.838129   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:05.075856   65557 addons.go:505] duration metric: took 1.675722032s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 00:59:05.639116   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:08.138282   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:07.039088   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:09.538790   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:08.579872   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.080006   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.579949   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:10.079511   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:10.579616   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:11.080003   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:11.580335   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:12.079830   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:12.579519   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:13.080004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:09.839624   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:12.335977   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:10.138471   65557 node_ready.go:53] node "embed-certs-164135" has status "Ready":"False"
	I0314 00:59:11.138534   65557 node_ready.go:49] node "embed-certs-164135" has status "Ready":"True"
	I0314 00:59:11.138572   65557 node_ready.go:38] duration metric: took 7.504341185s for node "embed-certs-164135" to be "Ready" ...
	I0314 00:59:11.138593   65557 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:59:11.145002   65557 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:11.150712   65557 pod_ready.go:92] pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:11.150735   65557 pod_ready.go:81] duration metric: took 5.69376ms for pod "coredns-5dd5756b68-r2dml" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:11.150743   65557 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:13.157122   65557 pod_ready.go:102] pod "etcd-embed-certs-164135" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:11.539006   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:14.038372   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:13.580021   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.079972   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.580562   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:15.079973   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:15.580183   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:16.080442   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:16.580265   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:17.079726   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:17.580004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:18.080000   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:14.336576   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:16.836200   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:15.158112   65557 pod_ready.go:92] pod "etcd-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.158134   65557 pod_ready.go:81] duration metric: took 4.0073854s for pod "etcd-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.158143   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.164046   65557 pod_ready.go:92] pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.164066   65557 pod_ready.go:81] duration metric: took 5.916933ms for pod "kube-apiserver-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.164075   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.172381   65557 pod_ready.go:92] pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.172400   65557 pod_ready.go:81] duration metric: took 8.319741ms for pod "kube-controller-manager-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.172408   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.178027   65557 pod_ready.go:92] pod "kube-proxy-wjz6d" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.178047   65557 pod_ready.go:81] duration metric: took 5.632365ms for pod "kube-proxy-wjz6d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.178066   65557 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.185425   65557 pod_ready.go:92] pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace has status "Ready":"True"
	I0314 00:59:15.185445   65557 pod_ready.go:81] duration metric: took 7.370111ms for pod "kube-scheduler-embed-certs-164135" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:15.185455   65557 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	I0314 00:59:17.191963   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:19.198718   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:16.537469   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:18.537882   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:18.580382   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.079467   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.579813   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:20.080492   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:20.580051   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:21.079982   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:21.579462   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:22.079943   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:22.579753   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:23.079986   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:19.336004   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:21.835829   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:21.694213   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:24.192099   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:20.538270   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:23.038355   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:23.579609   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:24.080429   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:24.579806   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:25.079568   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:25.580411   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:26.079986   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:26.580297   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:27.079547   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:27.579543   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:28.080116   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:23.837356   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:25.844148   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.336761   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:26.193550   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.693261   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:25.537801   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.038015   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:28.580503   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:29.079562   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:29.579984   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.079977   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.579657   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:31.080002   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:31.580430   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:32.079709   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:32.579764   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:33.079717   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:30.835476   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.335371   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:31.192779   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.194092   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:30.537951   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:32.538810   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:35.038186   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:33.579468   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:34.079959   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:34.579891   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:35.079953   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:35.579666   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:36.080471   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:36.580528   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:36.580620   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:36.628794   66232 cri.go:89] found id: ""
	I0314 00:59:36.628825   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.628836   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:36.628844   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:36.628903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:36.665474   66232 cri.go:89] found id: ""
	I0314 00:59:36.665504   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.665514   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:36.665521   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:36.665612   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:36.703404   66232 cri.go:89] found id: ""
	I0314 00:59:36.703436   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.703443   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:36.703449   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:36.703515   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:36.739602   66232 cri.go:89] found id: ""
	I0314 00:59:36.739629   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.739636   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:36.739642   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:36.739698   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:36.777836   66232 cri.go:89] found id: ""
	I0314 00:59:36.777862   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.777869   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:36.777875   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:36.777921   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:36.817211   66232 cri.go:89] found id: ""
	I0314 00:59:36.817254   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.817264   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:36.817271   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:36.817320   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:36.855890   66232 cri.go:89] found id: ""
	I0314 00:59:36.855924   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.855943   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:36.855951   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:36.856007   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:36.894333   66232 cri.go:89] found id: ""
	I0314 00:59:36.894360   66232 logs.go:276] 0 containers: []
	W0314 00:59:36.894371   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:36.894391   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:36.894406   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:36.909757   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:36.909796   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:37.039754   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:37.039774   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:37.039785   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:37.100601   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:37.100635   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:37.143950   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:37.143976   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:35.837374   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:38.335068   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:35.692269   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:37.692333   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:37.538270   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:40.039124   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:39.696850   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:39.720410   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:39.720480   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:39.759574   66232 cri.go:89] found id: ""
	I0314 00:59:39.759624   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.759635   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:39.759643   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:39.759719   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:39.802990   66232 cri.go:89] found id: ""
	I0314 00:59:39.803013   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.803021   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:39.803026   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:39.803090   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:39.850691   66232 cri.go:89] found id: ""
	I0314 00:59:39.850718   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.850729   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:39.850736   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:39.850831   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:39.890748   66232 cri.go:89] found id: ""
	I0314 00:59:39.890796   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.890806   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:39.890813   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:39.890871   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:39.929333   66232 cri.go:89] found id: ""
	I0314 00:59:39.929361   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.929368   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:39.929374   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:39.929428   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:39.969207   66232 cri.go:89] found id: ""
	I0314 00:59:39.969241   66232 logs.go:276] 0 containers: []
	W0314 00:59:39.969248   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:39.969254   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:39.969328   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:40.006207   66232 cri.go:89] found id: ""
	I0314 00:59:40.006241   66232 logs.go:276] 0 containers: []
	W0314 00:59:40.006252   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:40.006260   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:40.006343   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:40.047357   66232 cri.go:89] found id: ""
	I0314 00:59:40.047384   66232 logs.go:276] 0 containers: []
	W0314 00:59:40.047391   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:40.047400   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:40.047418   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:40.095431   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:40.095461   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:40.151675   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:40.151710   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:40.169388   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:40.169426   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:40.252915   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:40.252941   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:40.252958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:42.828437   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:42.842753   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:42.842838   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:42.881157   66232 cri.go:89] found id: ""
	I0314 00:59:42.881189   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.881200   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:42.881207   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:42.881267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:42.921364   66232 cri.go:89] found id: ""
	I0314 00:59:42.921393   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.921405   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:42.921412   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:42.921477   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:42.956622   66232 cri.go:89] found id: ""
	I0314 00:59:42.956647   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.956655   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:42.956660   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:42.956705   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:42.994476   66232 cri.go:89] found id: ""
	I0314 00:59:42.994502   66232 logs.go:276] 0 containers: []
	W0314 00:59:42.994514   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:42.994521   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:42.994580   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:43.032061   66232 cri.go:89] found id: ""
	I0314 00:59:43.032089   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.032099   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:43.032106   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:43.032177   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:43.073398   66232 cri.go:89] found id: ""
	I0314 00:59:43.073427   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.073444   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:43.073452   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:43.073527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:40.336003   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.336136   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:40.192758   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.193411   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:42.538036   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:45.038933   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:43.111407   66232 cri.go:89] found id: ""
	I0314 00:59:43.111891   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.111902   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:43.111909   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:43.111988   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:43.154347   66232 cri.go:89] found id: ""
	I0314 00:59:43.154374   66232 logs.go:276] 0 containers: []
	W0314 00:59:43.154384   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:43.154393   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:43.154422   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:43.202605   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:43.202636   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:43.257108   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:43.257143   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:43.273252   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:43.273282   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:43.347646   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:43.347671   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:43.347687   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:45.920045   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:45.934299   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:45.934379   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:45.973556   66232 cri.go:89] found id: ""
	I0314 00:59:45.973588   66232 logs.go:276] 0 containers: []
	W0314 00:59:45.973599   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:45.973607   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:45.973668   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:46.012623   66232 cri.go:89] found id: ""
	I0314 00:59:46.012653   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.012660   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:46.012667   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:46.012720   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:46.052290   66232 cri.go:89] found id: ""
	I0314 00:59:46.052318   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.052328   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:46.052336   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:46.052401   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:46.089098   66232 cri.go:89] found id: ""
	I0314 00:59:46.089129   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.089139   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:46.089147   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:46.089207   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:46.149733   66232 cri.go:89] found id: ""
	I0314 00:59:46.149768   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.149778   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:46.149787   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:46.149856   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:46.210517   66232 cri.go:89] found id: ""
	I0314 00:59:46.210548   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.210555   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:46.210563   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:46.210631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:46.275257   66232 cri.go:89] found id: ""
	I0314 00:59:46.275288   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.275299   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:46.275307   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:46.275373   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:46.319784   66232 cri.go:89] found id: ""
	I0314 00:59:46.319808   66232 logs.go:276] 0 containers: []
	W0314 00:59:46.319819   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:46.319829   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:46.319843   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:46.366285   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:46.366319   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:46.423978   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:46.424015   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:46.438508   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:46.438535   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:46.509518   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:46.509538   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:46.509552   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:44.337116   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:46.341237   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:44.698272   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:47.192460   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.193298   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:47.537766   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.541370   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:49.089210   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:49.105225   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:49.105298   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:49.146293   66232 cri.go:89] found id: ""
	I0314 00:59:49.146319   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.146326   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:49.146331   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:49.146377   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:49.190814   66232 cri.go:89] found id: ""
	I0314 00:59:49.190838   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.190847   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:49.190854   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:49.190910   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:49.230181   66232 cri.go:89] found id: ""
	I0314 00:59:49.230206   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.230214   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:49.230219   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:49.230267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:49.268437   66232 cri.go:89] found id: ""
	I0314 00:59:49.268468   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.268479   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:49.268486   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:49.268547   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:49.306838   66232 cri.go:89] found id: ""
	I0314 00:59:49.306869   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.306877   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:49.306883   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:49.306944   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:49.348907   66232 cri.go:89] found id: ""
	I0314 00:59:49.348937   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.348948   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:49.348956   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:49.349014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:49.391993   66232 cri.go:89] found id: ""
	I0314 00:59:49.392017   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.392025   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:49.392030   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:49.392133   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:49.433957   66232 cri.go:89] found id: ""
	I0314 00:59:49.433988   66232 logs.go:276] 0 containers: []
	W0314 00:59:49.434000   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:49.434011   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:49.434026   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:49.490808   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:49.490846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:49.506203   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:49.506231   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:49.596998   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:49.597017   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:49.597034   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:49.683358   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:49.683396   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:52.230217   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:52.243787   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:52.243845   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:52.284399   66232 cri.go:89] found id: ""
	I0314 00:59:52.284424   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.284434   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:52.284441   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:52.284486   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:52.319413   66232 cri.go:89] found id: ""
	I0314 00:59:52.319439   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.319450   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:52.319457   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:52.319517   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:52.355774   66232 cri.go:89] found id: ""
	I0314 00:59:52.355804   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.355812   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:52.355818   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:52.355873   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:52.393420   66232 cri.go:89] found id: ""
	I0314 00:59:52.393445   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.393453   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:52.393459   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:52.393562   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:52.435598   66232 cri.go:89] found id: ""
	I0314 00:59:52.435627   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.435637   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:52.435646   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:52.435700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:52.478202   66232 cri.go:89] found id: ""
	I0314 00:59:52.478230   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.478241   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:52.478250   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:52.478300   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:52.515135   66232 cri.go:89] found id: ""
	I0314 00:59:52.515165   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.515176   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:52.515185   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:52.515251   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:52.553094   66232 cri.go:89] found id: ""
	I0314 00:59:52.553126   66232 logs.go:276] 0 containers: []
	W0314 00:59:52.553143   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:52.553150   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:52.553174   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:52.568538   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:52.568565   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:52.643136   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:52.643164   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:52.643180   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:52.729674   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:52.729708   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:52.778312   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:52.778343   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:48.837200   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:51.336514   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:53.338910   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:51.693709   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:53.694241   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:52.037993   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:54.038771   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:55.333953   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:55.348232   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:55.348292   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:55.386488   66232 cri.go:89] found id: ""
	I0314 00:59:55.386517   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.386526   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:55.386534   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:55.386597   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:55.428706   66232 cri.go:89] found id: ""
	I0314 00:59:55.428737   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.428748   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:55.428755   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:55.428820   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:55.465448   66232 cri.go:89] found id: ""
	I0314 00:59:55.465478   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.465489   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:55.465495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:55.465558   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:55.503442   66232 cri.go:89] found id: ""
	I0314 00:59:55.503469   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.503479   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:55.503487   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:55.503582   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:55.542098   66232 cri.go:89] found id: ""
	I0314 00:59:55.542127   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.542137   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:55.542145   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:55.542209   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:55.580298   66232 cri.go:89] found id: ""
	I0314 00:59:55.580321   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.580329   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:55.580335   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:55.580405   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:55.625460   66232 cri.go:89] found id: ""
	I0314 00:59:55.625482   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.625489   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:55.625495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:55.625544   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:55.663273   66232 cri.go:89] found id: ""
	I0314 00:59:55.663301   66232 logs.go:276] 0 containers: []
	W0314 00:59:55.663316   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:55.663327   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:55.663373   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:55.680020   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:55.680047   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:55.764504   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:55.764523   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:55.764537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:55.842804   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:55.842837   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 00:59:55.889505   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:55.889540   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:55.836332   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.335436   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:56.193387   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.692808   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:56.045666   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.538405   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 00:59:58.445178   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:59:58.459321   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 00:59:58.459397   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 00:59:58.498338   66232 cri.go:89] found id: ""
	I0314 00:59:58.498362   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.498369   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 00:59:58.498374   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 00:59:58.498422   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 00:59:58.536406   66232 cri.go:89] found id: ""
	I0314 00:59:58.536434   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.536444   66232 logs.go:278] No container was found matching "etcd"
	I0314 00:59:58.536451   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 00:59:58.536509   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 00:59:58.574902   66232 cri.go:89] found id: ""
	I0314 00:59:58.574930   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.574937   66232 logs.go:278] No container was found matching "coredns"
	I0314 00:59:58.574943   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 00:59:58.574988   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 00:59:58.613132   66232 cri.go:89] found id: ""
	I0314 00:59:58.613154   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.613162   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 00:59:58.613167   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 00:59:58.613211   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 00:59:58.651052   66232 cri.go:89] found id: ""
	I0314 00:59:58.651076   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.651085   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 00:59:58.651104   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 00:59:58.651170   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 00:59:58.686347   66232 cri.go:89] found id: ""
	I0314 00:59:58.686375   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.686385   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 00:59:58.686393   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 00:59:58.686443   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 00:59:58.725992   66232 cri.go:89] found id: ""
	I0314 00:59:58.726021   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.726030   66232 logs.go:278] No container was found matching "kindnet"
	I0314 00:59:58.726037   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 00:59:58.726113   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 00:59:58.764130   66232 cri.go:89] found id: ""
	I0314 00:59:58.764153   66232 logs.go:276] 0 containers: []
	W0314 00:59:58.764161   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 00:59:58.764169   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 00:59:58.764181   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 00:59:58.816153   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 00:59:58.816195   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 00:59:58.831675   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 00:59:58.831703   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 00:59:58.912867   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 00:59:58.912890   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 00:59:58.912902   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 00:59:59.000502   66232 logs.go:123] Gathering logs for container status ...
	I0314 00:59:59.000537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:01.544701   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:01.561114   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:01.561192   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:01.603886   66232 cri.go:89] found id: ""
	I0314 01:00:01.603916   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.603924   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:01.603929   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:01.603989   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:01.645142   66232 cri.go:89] found id: ""
	I0314 01:00:01.645174   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.645189   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:01.645196   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:01.645248   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:01.686281   66232 cri.go:89] found id: ""
	I0314 01:00:01.686317   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.686326   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:01.686332   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:01.686389   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:01.729909   66232 cri.go:89] found id: ""
	I0314 01:00:01.729945   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.729955   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:01.729963   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:01.730029   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:01.773709   66232 cri.go:89] found id: ""
	I0314 01:00:01.773746   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.773754   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:01.773770   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:01.773833   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:01.813535   66232 cri.go:89] found id: ""
	I0314 01:00:01.813560   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.813568   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:01.813573   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:01.813632   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:01.855452   66232 cri.go:89] found id: ""
	I0314 01:00:01.855482   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.855493   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:01.855499   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:01.855561   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:01.892261   66232 cri.go:89] found id: ""
	I0314 01:00:01.892287   66232 logs.go:276] 0 containers: []
	W0314 01:00:01.892297   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:01.892308   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:01.892322   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:01.945227   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:01.945258   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:01.961280   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:01.961307   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:02.039204   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:02.039227   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:02.039241   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:02.116966   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:02.117002   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:00.840447   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:03.335752   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:00.693223   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:02.694565   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:00.538670   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:02.539348   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:05.037780   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:04.659869   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:04.673750   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:04.673818   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:04.713767   66232 cri.go:89] found id: ""
	I0314 01:00:04.713802   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.713813   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:04.713820   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:04.713882   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:04.750205   66232 cri.go:89] found id: ""
	I0314 01:00:04.750240   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.750252   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:04.750259   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:04.750323   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:04.789742   66232 cri.go:89] found id: ""
	I0314 01:00:04.789770   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.789778   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:04.789784   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:04.789832   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:04.826033   66232 cri.go:89] found id: ""
	I0314 01:00:04.826071   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.826091   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:04.826099   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:04.826161   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:04.865283   66232 cri.go:89] found id: ""
	I0314 01:00:04.865320   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.865330   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:04.865339   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:04.865387   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:04.906716   66232 cri.go:89] found id: ""
	I0314 01:00:04.906745   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.906756   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:04.906774   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:04.906835   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:04.943834   66232 cri.go:89] found id: ""
	I0314 01:00:04.943867   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.943879   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:04.943887   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:04.943953   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:04.986408   66232 cri.go:89] found id: ""
	I0314 01:00:04.986435   66232 logs.go:276] 0 containers: []
	W0314 01:00:04.986445   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:04.986456   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:04.986472   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:05.040543   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:05.040583   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:05.055657   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:05.055685   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:05.133883   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:05.133907   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:05.133921   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:05.213133   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:05.213170   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:07.754533   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:07.768008   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:07.768084   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:07.807785   66232 cri.go:89] found id: ""
	I0314 01:00:07.807814   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.807823   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:07.807830   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:07.807889   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:07.847500   66232 cri.go:89] found id: ""
	I0314 01:00:07.847529   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.847539   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:07.847547   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:07.847609   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:07.886507   66232 cri.go:89] found id: ""
	I0314 01:00:07.886534   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.886557   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:07.886563   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:07.886619   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:07.923881   66232 cri.go:89] found id: ""
	I0314 01:00:07.923908   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.923918   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:07.923925   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:07.923985   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:07.959149   66232 cri.go:89] found id: ""
	I0314 01:00:07.959179   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.959190   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:07.959198   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:07.959257   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:07.995821   66232 cri.go:89] found id: ""
	I0314 01:00:07.995849   66232 logs.go:276] 0 containers: []
	W0314 01:00:07.995861   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:07.995869   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:07.995926   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:08.033530   66232 cri.go:89] found id: ""
	I0314 01:00:08.033554   66232 logs.go:276] 0 containers: []
	W0314 01:00:08.033561   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:08.033567   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:08.033613   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:08.069304   66232 cri.go:89] found id: ""
	I0314 01:00:08.069332   66232 logs.go:276] 0 containers: []
	W0314 01:00:08.069341   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:08.069352   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:08.069366   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:05.838145   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:08.336193   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:05.192544   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:07.193040   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:09.195569   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:07.040795   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:09.538606   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:08.122695   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:08.122727   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:08.138439   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:08.138466   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:08.220553   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:08.220574   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:08.220586   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:08.301108   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:08.301143   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:10.858540   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:10.872473   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:10.872527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:10.911114   66232 cri.go:89] found id: ""
	I0314 01:00:10.911143   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.911154   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:10.911161   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:10.911218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:10.951647   66232 cri.go:89] found id: ""
	I0314 01:00:10.951678   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.951690   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:10.951697   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:10.951764   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:10.989244   66232 cri.go:89] found id: ""
	I0314 01:00:10.989272   66232 logs.go:276] 0 containers: []
	W0314 01:00:10.989283   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:10.989291   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:10.989368   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:11.029977   66232 cri.go:89] found id: ""
	I0314 01:00:11.030004   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.030011   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:11.030017   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:11.030079   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:11.067444   66232 cri.go:89] found id: ""
	I0314 01:00:11.067467   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.067474   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:11.067480   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:11.067527   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:11.104202   66232 cri.go:89] found id: ""
	I0314 01:00:11.104225   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.104233   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:11.104242   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:11.104302   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:11.143323   66232 cri.go:89] found id: ""
	I0314 01:00:11.143348   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.143376   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:11.143384   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:11.143438   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:11.182568   66232 cri.go:89] found id: ""
	I0314 01:00:11.182598   66232 logs.go:276] 0 containers: []
	W0314 01:00:11.182608   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:11.182619   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:11.182640   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:11.199532   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:11.199572   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:11.276697   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:11.276722   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:11.276737   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:11.362086   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:11.362121   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:11.407686   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:11.407721   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:10.338610   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:12.835743   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:11.201752   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:13.692443   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:12.038010   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:14.038915   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:13.965971   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:13.981052   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:13.981124   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:14.021047   66232 cri.go:89] found id: ""
	I0314 01:00:14.021073   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.021085   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:14.021092   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:14.021150   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:14.066605   66232 cri.go:89] found id: ""
	I0314 01:00:14.066632   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.066638   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:14.066644   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:14.066689   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:14.105253   66232 cri.go:89] found id: ""
	I0314 01:00:14.105281   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.105290   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:14.105299   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:14.105407   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:14.141084   66232 cri.go:89] found id: ""
	I0314 01:00:14.141116   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.141126   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:14.141133   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:14.141194   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:14.177883   66232 cri.go:89] found id: ""
	I0314 01:00:14.177914   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.177924   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:14.177944   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:14.178010   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:14.217102   66232 cri.go:89] found id: ""
	I0314 01:00:14.217133   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.217144   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:14.217162   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:14.217218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:14.256624   66232 cri.go:89] found id: ""
	I0314 01:00:14.256652   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.256662   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:14.256669   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:14.256731   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:14.295330   66232 cri.go:89] found id: ""
	I0314 01:00:14.295358   66232 logs.go:276] 0 containers: []
	W0314 01:00:14.295368   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:14.295378   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:14.295395   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:14.351898   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:14.351947   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:14.368360   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:14.368399   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:14.447629   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:14.447651   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:14.447678   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:14.536275   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:14.536307   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:17.079641   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:17.093657   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:17.093730   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:17.131290   66232 cri.go:89] found id: ""
	I0314 01:00:17.131318   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.131327   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:17.131333   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:17.131379   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:17.169832   66232 cri.go:89] found id: ""
	I0314 01:00:17.169864   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.169874   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:17.169882   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:17.169942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:17.206961   66232 cri.go:89] found id: ""
	I0314 01:00:17.206982   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.206989   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:17.206994   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:17.207047   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:17.245675   66232 cri.go:89] found id: ""
	I0314 01:00:17.245703   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.245714   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:17.245721   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:17.245776   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:17.287768   66232 cri.go:89] found id: ""
	I0314 01:00:17.287797   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.287808   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:17.287815   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:17.287881   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:17.322555   66232 cri.go:89] found id: ""
	I0314 01:00:17.322590   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.322600   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:17.322608   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:17.322669   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:17.361149   66232 cri.go:89] found id: ""
	I0314 01:00:17.361176   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.361190   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:17.361197   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:17.361255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:17.397191   66232 cri.go:89] found id: ""
	I0314 01:00:17.397218   66232 logs.go:276] 0 containers: []
	W0314 01:00:17.397227   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:17.397236   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:17.397248   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:17.412959   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:17.412988   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:17.493344   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:17.493364   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:17.493375   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:17.573531   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:17.573564   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:17.616326   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:17.616369   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:14.837070   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:17.335625   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:15.693453   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:18.192702   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:16.537571   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:18.537742   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.171238   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:20.186834   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:20.186890   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:20.226834   66232 cri.go:89] found id: ""
	I0314 01:00:20.226856   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.226863   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:20.226868   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:20.226916   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:20.263003   66232 cri.go:89] found id: ""
	I0314 01:00:20.263032   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.263043   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:20.263052   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:20.263135   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:20.306354   66232 cri.go:89] found id: ""
	I0314 01:00:20.306378   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.306388   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:20.306397   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:20.306458   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:20.342460   66232 cri.go:89] found id: ""
	I0314 01:00:20.342491   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.342501   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:20.342509   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:20.342572   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:20.383367   66232 cri.go:89] found id: ""
	I0314 01:00:20.383395   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.383406   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:20.383414   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:20.383474   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:20.423190   66232 cri.go:89] found id: ""
	I0314 01:00:20.423220   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.423231   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:20.423240   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:20.423296   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:20.473454   66232 cri.go:89] found id: ""
	I0314 01:00:20.473501   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.473510   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:20.473518   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:20.473577   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:20.517922   66232 cri.go:89] found id: ""
	I0314 01:00:20.517954   66232 logs.go:276] 0 containers: []
	W0314 01:00:20.517964   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:20.517976   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:20.517992   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:20.572023   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:20.572059   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:20.589573   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:20.589601   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:20.670843   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:20.670866   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:20.670881   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:20.753165   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:20.753201   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:19.336013   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:21.338995   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.194020   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:22.194237   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:20.539631   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:22.539868   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:25.037490   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
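
The interleaved pod_ready lines come from the other profiles running in parallel (PIDs 65864, 65557 and 66021), each polling its metrics-server pod until the Ready condition turns True. An equivalent manual check; "$PROFILE" is a placeholder, and the k8s-app=metrics-server label selector is an assumption (the label the addon's deployment normally carries):

# Print each metrics-server pod together with its Ready condition, the same
# signal pod_ready.go is waiting on in the lines above.
kubectl --context "$PROFILE" -n kube-system get pods -l k8s-app=metrics-server \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
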
	I0314 01:00:23.299823   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:23.313303   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:23.313398   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:23.352500   66232 cri.go:89] found id: ""
	I0314 01:00:23.352531   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.352542   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:23.352550   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:23.352610   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:23.391967   66232 cri.go:89] found id: ""
	I0314 01:00:23.391997   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.392005   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:23.392013   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:23.392078   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:23.433269   66232 cri.go:89] found id: ""
	I0314 01:00:23.433303   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.433314   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:23.433324   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:23.433388   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:23.471251   66232 cri.go:89] found id: ""
	I0314 01:00:23.471278   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.471290   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:23.471297   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:23.471359   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:23.507920   66232 cri.go:89] found id: ""
	I0314 01:00:23.507952   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.507960   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:23.507966   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:23.508023   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:23.550432   66232 cri.go:89] found id: ""
	I0314 01:00:23.550464   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.550474   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:23.550483   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:23.550570   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:23.589750   66232 cri.go:89] found id: ""
	I0314 01:00:23.589773   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.589781   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:23.589789   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:23.589853   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:23.626135   66232 cri.go:89] found id: ""
	I0314 01:00:23.626171   66232 logs.go:276] 0 containers: []
	W0314 01:00:23.626191   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:23.626202   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:23.626217   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:23.681729   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:23.681763   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:23.698219   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:23.698246   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:23.773285   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:23.773309   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:23.773321   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:23.856417   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:23.856449   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:26.399787   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:26.414459   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:26.414525   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:26.452117   66232 cri.go:89] found id: ""
	I0314 01:00:26.452142   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.452153   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:26.452162   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:26.452223   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:26.488892   66232 cri.go:89] found id: ""
	I0314 01:00:26.488918   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.488925   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:26.488931   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:26.488980   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:26.530194   66232 cri.go:89] found id: ""
	I0314 01:00:26.530224   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.530234   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:26.530242   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:26.530307   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:26.571356   66232 cri.go:89] found id: ""
	I0314 01:00:26.571382   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.571394   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:26.571402   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:26.571469   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:26.611465   66232 cri.go:89] found id: ""
	I0314 01:00:26.611492   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.611500   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:26.611522   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:26.611572   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:26.649783   66232 cri.go:89] found id: ""
	I0314 01:00:26.649811   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.649821   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:26.649830   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:26.649894   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:26.687519   66232 cri.go:89] found id: ""
	I0314 01:00:26.687546   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.687556   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:26.687569   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:26.687631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:26.726277   66232 cri.go:89] found id: ""
	I0314 01:00:26.726311   66232 logs.go:276] 0 containers: []
	W0314 01:00:26.726322   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:26.726333   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:26.726349   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:26.743133   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:26.743162   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:26.824026   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:26.824046   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:26.824062   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:26.907032   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:26.907065   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:26.977583   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:26.977609   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
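
Every "describe nodes" attempt in these passes fails the same way: kubectl cannot reach localhost:8443 because, as the crictl checks show, no kube-apiserver container exists on this node yet. One way to confirm that directly, sketched under the same "$PROFILE" placeholder assumption:

# If nothing is listening on the apiserver port, the fallback message is printed
# and the describe-nodes step above keeps failing with "connection refused".
minikube -p "$PROFILE" ssh -- "sudo ss -tlnp | grep 8443 || echo 'nothing listening on 8443'"
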
	I0314 01:00:23.837152   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:26.335576   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:24.694276   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:27.192662   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.193302   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:27.037952   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.038545   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:29.530758   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:29.546984   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:29.547050   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:29.589191   66232 cri.go:89] found id: ""
	I0314 01:00:29.589214   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.589222   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:29.589231   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:29.589294   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:29.630380   66232 cri.go:89] found id: ""
	I0314 01:00:29.630407   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.630419   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:29.630426   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:29.630488   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:29.667407   66232 cri.go:89] found id: ""
	I0314 01:00:29.667443   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.667455   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:29.667463   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:29.667524   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:29.705745   66232 cri.go:89] found id: ""
	I0314 01:00:29.705776   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.705784   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:29.705790   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:29.705851   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:29.745280   66232 cri.go:89] found id: ""
	I0314 01:00:29.745314   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.745324   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:29.745335   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:29.745390   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:29.782900   66232 cri.go:89] found id: ""
	I0314 01:00:29.782935   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.782945   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:29.782954   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:29.783014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:29.825324   66232 cri.go:89] found id: ""
	I0314 01:00:29.825352   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.825363   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:29.825371   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:29.825436   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:29.869433   66232 cri.go:89] found id: ""
	I0314 01:00:29.869466   66232 logs.go:276] 0 containers: []
	W0314 01:00:29.869476   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:29.869487   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:29.869502   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:29.912468   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:29.912494   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:29.965515   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:29.965555   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:29.982343   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:29.982367   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:30.057772   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:30.057797   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:30.057814   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:32.644707   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:32.667874   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:32.667950   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:32.727931   66232 cri.go:89] found id: ""
	I0314 01:00:32.727960   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.727971   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:32.727979   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:32.728038   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:32.766885   66232 cri.go:89] found id: ""
	I0314 01:00:32.766911   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.766921   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:32.766929   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:32.766989   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:32.804099   66232 cri.go:89] found id: ""
	I0314 01:00:32.804128   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.804137   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:32.804143   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:32.804200   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:32.845468   66232 cri.go:89] found id: ""
	I0314 01:00:32.845498   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.845507   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:32.845516   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:32.845607   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:32.884350   66232 cri.go:89] found id: ""
	I0314 01:00:32.884372   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.884380   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:32.884386   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:32.884437   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:32.920634   66232 cri.go:89] found id: ""
	I0314 01:00:32.920676   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.920692   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:32.920700   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:32.920756   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:32.959586   66232 cri.go:89] found id: ""
	I0314 01:00:32.959616   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.959627   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:32.959634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:32.959699   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:32.998814   66232 cri.go:89] found id: ""
	I0314 01:00:32.998854   66232 logs.go:276] 0 containers: []
	W0314 01:00:32.998865   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:32.998882   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:32.998895   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:33.054782   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:33.054813   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:33.069772   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:33.069807   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:00:28.836740   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.335908   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:33.336613   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.692393   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:33.695343   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:31.539723   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:34.038889   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	W0314 01:00:33.153893   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:33.153913   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:33.153925   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:33.234165   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:33.234197   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:35.781872   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:35.797220   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:35.797300   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:35.836749   66232 cri.go:89] found id: ""
	I0314 01:00:35.836773   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.836779   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:35.836785   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:35.836841   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:35.875754   66232 cri.go:89] found id: ""
	I0314 01:00:35.875782   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.875790   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:35.875797   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:35.875844   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:35.914337   66232 cri.go:89] found id: ""
	I0314 01:00:35.914360   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.914368   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:35.914373   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:35.914428   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:35.954287   66232 cri.go:89] found id: ""
	I0314 01:00:35.954306   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.954313   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:35.954318   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:35.954365   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:35.995361   66232 cri.go:89] found id: ""
	I0314 01:00:35.995385   66232 logs.go:276] 0 containers: []
	W0314 01:00:35.995393   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:35.995398   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:35.995455   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:36.040462   66232 cri.go:89] found id: ""
	I0314 01:00:36.040488   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.040497   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:36.040503   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:36.040567   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:36.078740   66232 cri.go:89] found id: ""
	I0314 01:00:36.078786   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.078797   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:36.078814   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:36.078885   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:36.120165   66232 cri.go:89] found id: ""
	I0314 01:00:36.120193   66232 logs.go:276] 0 containers: []
	W0314 01:00:36.120203   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:36.120213   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:36.120239   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:36.136275   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:36.136312   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:36.217907   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:36.217929   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:36.217944   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:36.295177   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:36.295212   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:36.342587   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:36.342623   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:35.336966   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:37.337764   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:36.193887   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.693150   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:36.538529   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.538996   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:38.900832   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:38.914693   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:38.914782   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:38.954297   66232 cri.go:89] found id: ""
	I0314 01:00:38.954333   66232 logs.go:276] 0 containers: []
	W0314 01:00:38.954347   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:38.954354   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:38.954414   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:38.992427   66232 cri.go:89] found id: ""
	I0314 01:00:38.992458   66232 logs.go:276] 0 containers: []
	W0314 01:00:38.992468   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:38.992474   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:38.992521   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:39.028595   66232 cri.go:89] found id: ""
	I0314 01:00:39.028629   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.028640   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:39.028647   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:39.028707   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:39.064418   66232 cri.go:89] found id: ""
	I0314 01:00:39.064443   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.064450   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:39.064456   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:39.064503   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:39.101007   66232 cri.go:89] found id: ""
	I0314 01:00:39.101050   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.101060   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:39.101066   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:39.101125   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:39.142913   66232 cri.go:89] found id: ""
	I0314 01:00:39.142940   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.142950   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:39.142957   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:39.143018   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:39.179957   66232 cri.go:89] found id: ""
	I0314 01:00:39.179986   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.179997   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:39.180007   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:39.180068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:39.219688   66232 cri.go:89] found id: ""
	I0314 01:00:39.219712   66232 logs.go:276] 0 containers: []
	W0314 01:00:39.219720   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:39.219730   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:39.219747   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:39.234611   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:39.234642   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:39.306760   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:39.306808   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:39.306824   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:39.390739   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:39.390799   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:39.441782   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:39.441813   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:41.994667   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:42.008795   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:42.008865   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:42.045814   66232 cri.go:89] found id: ""
	I0314 01:00:42.045839   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.045846   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:42.045852   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:42.045903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:42.085519   66232 cri.go:89] found id: ""
	I0314 01:00:42.085550   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.085563   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:42.085571   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:42.085636   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:42.127334   66232 cri.go:89] found id: ""
	I0314 01:00:42.127359   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.127368   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:42.127374   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:42.127425   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:42.168890   66232 cri.go:89] found id: ""
	I0314 01:00:42.168915   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.168923   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:42.168929   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:42.168990   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:42.209915   66232 cri.go:89] found id: ""
	I0314 01:00:42.209937   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.209945   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:42.209950   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:42.210005   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:42.250858   66232 cri.go:89] found id: ""
	I0314 01:00:42.250880   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.250888   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:42.250897   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:42.250952   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:42.288731   66232 cri.go:89] found id: ""
	I0314 01:00:42.288779   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.288791   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:42.288799   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:42.288854   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:42.329002   66232 cri.go:89] found id: ""
	I0314 01:00:42.329030   66232 logs.go:276] 0 containers: []
	W0314 01:00:42.329041   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:42.329052   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:42.329066   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:42.371408   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:42.371435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:42.429017   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:42.429053   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:42.446217   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:42.446255   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:42.525765   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:42.525786   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:42.525798   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:39.338188   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:41.836306   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:40.694284   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:43.193538   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:40.540167   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:43.039511   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.122600   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:45.137115   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:45.137172   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:45.177658   66232 cri.go:89] found id: ""
	I0314 01:00:45.177685   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.177693   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:45.177698   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:45.177758   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:45.218191   66232 cri.go:89] found id: ""
	I0314 01:00:45.218220   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.218228   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:45.218234   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:45.218291   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:45.263650   66232 cri.go:89] found id: ""
	I0314 01:00:45.263673   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.263682   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:45.263688   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:45.263741   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:45.299533   66232 cri.go:89] found id: ""
	I0314 01:00:45.299562   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.299573   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:45.299579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:45.299626   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:45.338985   66232 cri.go:89] found id: ""
	I0314 01:00:45.339011   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.339021   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:45.339028   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:45.339089   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:45.380178   66232 cri.go:89] found id: ""
	I0314 01:00:45.380202   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.380210   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:45.380216   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:45.380272   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:45.420424   66232 cri.go:89] found id: ""
	I0314 01:00:45.420458   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.420470   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:45.420478   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:45.420540   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:45.460829   66232 cri.go:89] found id: ""
	I0314 01:00:45.460852   66232 logs.go:276] 0 containers: []
	W0314 01:00:45.460860   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:45.460870   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:45.460886   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:45.516541   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:45.516578   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:45.532856   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:45.532880   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:45.611749   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:45.611772   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:45.611786   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:45.693268   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:45.693297   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:43.836776   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:46.336671   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.692531   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:47.692748   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:45.539526   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:47.542274   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.037560   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:48.240420   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:48.254985   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:48.255045   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:48.294167   66232 cri.go:89] found id: ""
	I0314 01:00:48.294190   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.294198   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:48.294204   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:48.294265   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:48.331189   66232 cri.go:89] found id: ""
	I0314 01:00:48.331214   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.331223   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:48.331231   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:48.331291   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:48.367601   66232 cri.go:89] found id: ""
	I0314 01:00:48.367641   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.367652   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:48.367660   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:48.367723   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:48.405032   66232 cri.go:89] found id: ""
	I0314 01:00:48.405061   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.405072   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:48.405080   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:48.405148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:48.444641   66232 cri.go:89] found id: ""
	I0314 01:00:48.444664   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.444672   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:48.444678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:48.444737   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:48.481624   66232 cri.go:89] found id: ""
	I0314 01:00:48.481653   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.481661   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:48.481667   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:48.481718   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:48.518944   66232 cri.go:89] found id: ""
	I0314 01:00:48.518976   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.518984   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:48.518989   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:48.519047   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:48.558455   66232 cri.go:89] found id: ""
	I0314 01:00:48.558495   66232 logs.go:276] 0 containers: []
	W0314 01:00:48.558506   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:48.558518   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:48.558533   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:48.604953   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:48.604983   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:48.655766   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:48.655799   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:48.670370   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:48.670395   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:48.750567   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:48.750588   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:48.750601   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:51.342004   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:51.356115   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:51.356180   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:51.393740   66232 cri.go:89] found id: ""
	I0314 01:00:51.393766   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.393773   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:51.393778   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:51.393824   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:51.432939   66232 cri.go:89] found id: ""
	I0314 01:00:51.432969   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.432980   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:51.432998   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:51.433066   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:51.469309   66232 cri.go:89] found id: ""
	I0314 01:00:51.469332   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.469340   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:51.469345   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:51.469395   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:51.506576   66232 cri.go:89] found id: ""
	I0314 01:00:51.506606   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.506618   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:51.506626   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:51.506687   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:51.547323   66232 cri.go:89] found id: ""
	I0314 01:00:51.547348   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.547358   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:51.547365   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:51.547422   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:51.588257   66232 cri.go:89] found id: ""
	I0314 01:00:51.588281   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.588289   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:51.588295   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:51.588353   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:51.629026   66232 cri.go:89] found id: ""
	I0314 01:00:51.629049   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.629057   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:51.629064   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:51.629116   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:51.668857   66232 cri.go:89] found id: ""
	I0314 01:00:51.668890   66232 logs.go:276] 0 containers: []
	W0314 01:00:51.668903   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:51.668914   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:51.668930   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:51.724282   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:51.724329   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:51.739513   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:51.739543   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:51.815089   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:51.815116   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:51.815132   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:51.898576   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:51.898613   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:48.836517   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.837605   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:53.334491   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:50.192748   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:52.694281   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:52.038194   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:54.538685   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:54.441122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:54.456300   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:54.456358   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:54.492731   66232 cri.go:89] found id: ""
	I0314 01:00:54.492764   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.492776   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:54.492784   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:54.492847   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:54.530965   66232 cri.go:89] found id: ""
	I0314 01:00:54.530994   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.531005   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:54.531013   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:54.531075   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:54.570440   66232 cri.go:89] found id: ""
	I0314 01:00:54.570470   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.570487   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:54.570495   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:54.570557   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:54.611569   66232 cri.go:89] found id: ""
	I0314 01:00:54.611592   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.611599   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:54.611606   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:54.611660   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:54.648383   66232 cri.go:89] found id: ""
	I0314 01:00:54.648412   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.648421   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:54.648427   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:54.648476   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:54.686598   66232 cri.go:89] found id: ""
	I0314 01:00:54.686621   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.686636   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:54.686644   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:54.686701   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:54.726413   66232 cri.go:89] found id: ""
	I0314 01:00:54.726436   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.726444   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:54.726450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:54.726496   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:54.764126   66232 cri.go:89] found id: ""
	I0314 01:00:54.764167   66232 logs.go:276] 0 containers: []
	W0314 01:00:54.764177   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:54.764187   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:54.764201   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:54.841584   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:54.841612   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:54.841628   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:54.929736   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:54.929770   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:54.972612   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:54.972638   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:55.038415   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:55.038443   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:57.553419   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:00:57.567807   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:00:57.567865   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:00:57.608042   66232 cri.go:89] found id: ""
	I0314 01:00:57.608069   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.608077   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:00:57.608082   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:00:57.608138   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:00:57.647991   66232 cri.go:89] found id: ""
	I0314 01:00:57.648022   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.648031   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:00:57.648036   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:00:57.648096   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:00:57.687506   66232 cri.go:89] found id: ""
	I0314 01:00:57.687529   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.687537   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:00:57.687544   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:00:57.687603   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:00:57.726178   66232 cri.go:89] found id: ""
	I0314 01:00:57.726214   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.726224   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:00:57.726233   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:00:57.726297   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:00:57.763847   66232 cri.go:89] found id: ""
	I0314 01:00:57.763874   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.763881   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:00:57.763887   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:00:57.763946   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:00:57.800962   66232 cri.go:89] found id: ""
	I0314 01:00:57.800990   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.801001   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:00:57.801010   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:00:57.801063   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:00:57.838942   66232 cri.go:89] found id: ""
	I0314 01:00:57.838963   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.838970   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:00:57.838975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:00:57.839021   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:00:57.875376   66232 cri.go:89] found id: ""
	I0314 01:00:57.875405   66232 logs.go:276] 0 containers: []
	W0314 01:00:57.875415   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:00:57.875424   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:00:57.875435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:00:57.917732   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:00:57.917755   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:00:57.971528   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:00:57.971561   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:00:57.986854   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:00:57.986879   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:00:58.066955   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:00:58.066975   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:00:58.066985   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:00:55.337356   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.836856   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:55.191933   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.193287   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:59.197833   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:57.039559   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:00:59.537165   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:00.655786   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:00.672026   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:00.672105   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:00.711128   66232 cri.go:89] found id: ""
	I0314 01:01:00.711157   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.711167   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:00.711174   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:00.711236   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:00.748236   66232 cri.go:89] found id: ""
	I0314 01:01:00.748264   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.748276   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:00.748284   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:00.748347   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:00.787436   66232 cri.go:89] found id: ""
	I0314 01:01:00.787470   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.787478   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:00.787486   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:00.787536   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:00.828583   66232 cri.go:89] found id: ""
	I0314 01:01:00.828605   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.828615   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:00.828623   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:00.828683   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:00.866856   66232 cri.go:89] found id: ""
	I0314 01:01:00.866885   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.866896   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:00.866903   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:00.866964   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:00.904860   66232 cri.go:89] found id: ""
	I0314 01:01:00.904883   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.904890   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:00.904895   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:00.904943   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:00.942199   66232 cri.go:89] found id: ""
	I0314 01:01:00.942232   66232 logs.go:276] 0 containers: []
	W0314 01:01:00.942243   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:00.942253   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:00.942322   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:01.003925   66232 cri.go:89] found id: ""
	I0314 01:01:01.003951   66232 logs.go:276] 0 containers: []
	W0314 01:01:01.003961   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:01.003972   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:01.003987   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:01.057875   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:01.057903   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:01.074102   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:01.074128   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:01.147570   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:01.147602   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:01.147617   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:01.229816   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:01.229846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:00.337903   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:02.836288   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:01.693336   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:04.193878   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:01.539596   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:04.037927   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:03.775990   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:03.789826   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:03.789893   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:03.832595   66232 cri.go:89] found id: ""
	I0314 01:01:03.832620   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.832631   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:03.832639   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:03.832701   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:03.870895   66232 cri.go:89] found id: ""
	I0314 01:01:03.870914   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.870922   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:03.870928   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:03.870975   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:03.909337   66232 cri.go:89] found id: ""
	I0314 01:01:03.909368   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.909379   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:03.909387   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:03.909447   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:03.952071   66232 cri.go:89] found id: ""
	I0314 01:01:03.952100   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.952110   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:03.952119   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:03.952182   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:03.989374   66232 cri.go:89] found id: ""
	I0314 01:01:03.989403   66232 logs.go:276] 0 containers: []
	W0314 01:01:03.989413   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:03.989421   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:03.989470   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:04.027654   66232 cri.go:89] found id: ""
	I0314 01:01:04.027683   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.027693   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:04.027702   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:04.027770   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:04.064870   66232 cri.go:89] found id: ""
	I0314 01:01:04.064904   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.064915   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:04.064923   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:04.064978   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:04.103214   66232 cri.go:89] found id: ""
	I0314 01:01:04.103246   66232 logs.go:276] 0 containers: []
	W0314 01:01:04.103257   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:04.103268   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:04.103282   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:04.154061   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:04.154098   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:04.168955   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:04.168981   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:04.245214   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:04.245239   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:04.245254   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:04.321782   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:04.321822   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:06.864312   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:06.879181   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:06.879259   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:06.919707   66232 cri.go:89] found id: ""
	I0314 01:01:06.919731   66232 logs.go:276] 0 containers: []
	W0314 01:01:06.919742   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:06.919749   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:06.919809   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:06.964118   66232 cri.go:89] found id: ""
	I0314 01:01:06.964154   66232 logs.go:276] 0 containers: []
	W0314 01:01:06.964165   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:06.964173   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:06.964222   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:07.005923   66232 cri.go:89] found id: ""
	I0314 01:01:07.005948   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.005955   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:07.005961   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:07.006014   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:07.048297   66232 cri.go:89] found id: ""
	I0314 01:01:07.048329   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.048336   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:07.048342   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:07.048400   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:07.089009   66232 cri.go:89] found id: ""
	I0314 01:01:07.089036   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.089044   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:07.089049   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:07.089108   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:07.125228   66232 cri.go:89] found id: ""
	I0314 01:01:07.125251   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.125259   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:07.125269   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:07.125329   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:07.163710   66232 cri.go:89] found id: ""
	I0314 01:01:07.163736   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.163743   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:07.163751   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:07.163797   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:07.202886   66232 cri.go:89] found id: ""
	I0314 01:01:07.202909   66232 logs.go:276] 0 containers: []
	W0314 01:01:07.202916   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:07.202924   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:07.202936   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:07.249071   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:07.249098   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:07.304923   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:07.304958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:07.319983   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:07.320011   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:07.398592   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:07.398627   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:07.398640   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:05.337479   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:07.836304   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:06.692373   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.192747   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:06.539182   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.038291   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:09.987439   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:10.002348   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:10.002424   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:10.039153   66232 cri.go:89] found id: ""
	I0314 01:01:10.039173   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.039179   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:10.039185   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:10.039236   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:10.073527   66232 cri.go:89] found id: ""
	I0314 01:01:10.073557   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.073568   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:10.073575   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:10.073650   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:10.112192   66232 cri.go:89] found id: ""
	I0314 01:01:10.112213   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.112223   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:10.112230   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:10.112288   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:10.152821   66232 cri.go:89] found id: ""
	I0314 01:01:10.152848   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.152857   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:10.152862   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:10.152919   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:10.189327   66232 cri.go:89] found id: ""
	I0314 01:01:10.189352   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.189364   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:10.189371   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:10.189427   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:10.233885   66232 cri.go:89] found id: ""
	I0314 01:01:10.233909   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.233917   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:10.233923   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:10.233975   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:10.272033   66232 cri.go:89] found id: ""
	I0314 01:01:10.272061   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.272069   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:10.272075   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:10.272129   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:10.312680   66232 cri.go:89] found id: ""
	I0314 01:01:10.312706   66232 logs.go:276] 0 containers: []
	W0314 01:01:10.312717   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:10.312727   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:10.312742   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:10.327507   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:10.327537   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:10.410274   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:10.410299   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:10.410311   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:10.498686   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:10.498721   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:10.543509   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:10.543561   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:13.098621   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:10.335968   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:12.836293   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:11.692899   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.696150   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:11.538154   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.540093   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:13.114598   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:13.114685   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:13.169907   66232 cri.go:89] found id: ""
	I0314 01:01:13.169930   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.169937   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:13.169943   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:13.169999   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:13.237394   66232 cri.go:89] found id: ""
	I0314 01:01:13.237417   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.237429   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:13.237439   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:13.237502   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:13.295227   66232 cri.go:89] found id: ""
	I0314 01:01:13.295250   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.295258   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:13.295265   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:13.295326   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:13.333351   66232 cri.go:89] found id: ""
	I0314 01:01:13.333378   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.333388   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:13.333396   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:13.333457   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:13.376480   66232 cri.go:89] found id: ""
	I0314 01:01:13.376503   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.376511   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:13.376516   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:13.376578   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:13.416746   66232 cri.go:89] found id: ""
	I0314 01:01:13.416778   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.416786   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:13.416792   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:13.416842   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:13.455971   66232 cri.go:89] found id: ""
	I0314 01:01:13.456004   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.456014   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:13.456022   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:13.456090   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:13.493921   66232 cri.go:89] found id: ""
	I0314 01:01:13.493952   66232 logs.go:276] 0 containers: []
	W0314 01:01:13.493964   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:13.493975   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:13.493994   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:13.582269   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:13.582317   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:13.627643   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:13.627675   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:13.680989   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:13.681021   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:13.696675   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:13.696708   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:13.768850   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:16.269385   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:16.284543   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:16.284607   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:16.322317   66232 cri.go:89] found id: ""
	I0314 01:01:16.322345   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.322356   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:16.322364   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:16.322412   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:16.362651   66232 cri.go:89] found id: ""
	I0314 01:01:16.362686   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.362697   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:16.362705   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:16.362782   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:16.403239   66232 cri.go:89] found id: ""
	I0314 01:01:16.403268   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.403276   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:16.403282   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:16.403339   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:16.442326   66232 cri.go:89] found id: ""
	I0314 01:01:16.442348   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.442355   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:16.442361   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:16.442423   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:16.480694   66232 cri.go:89] found id: ""
	I0314 01:01:16.480722   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.480733   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:16.480741   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:16.480809   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:16.521555   66232 cri.go:89] found id: ""
	I0314 01:01:16.521585   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.521596   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:16.521603   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:16.521663   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:16.564517   66232 cri.go:89] found id: ""
	I0314 01:01:16.564544   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.564555   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:16.564561   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:16.564641   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:16.602650   66232 cri.go:89] found id: ""
	I0314 01:01:16.602680   66232 logs.go:276] 0 containers: []
	W0314 01:01:16.602690   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:16.602701   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:16.602715   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:16.645742   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:16.645777   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:16.704940   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:16.704972   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:16.720393   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:16.720420   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:16.799609   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:16.799640   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:16.799655   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:14.836773   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:17.336818   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:16.192938   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:18.193968   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:16.038263   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:18.538739   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:19.388482   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:19.402293   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:19.402372   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:19.439978   66232 cri.go:89] found id: ""
	I0314 01:01:19.440002   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.440025   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:19.440033   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:19.440112   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:19.475984   66232 cri.go:89] found id: ""
	I0314 01:01:19.476011   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.476019   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:19.476026   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:19.476078   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:19.512705   66232 cri.go:89] found id: ""
	I0314 01:01:19.512733   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.512742   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:19.512748   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:19.512793   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:19.552300   66232 cri.go:89] found id: ""
	I0314 01:01:19.552329   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.552339   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:19.552347   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:19.552413   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:19.598630   66232 cri.go:89] found id: ""
	I0314 01:01:19.598660   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.598670   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:19.598678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:19.598741   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:19.635883   66232 cri.go:89] found id: ""
	I0314 01:01:19.635912   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.635924   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:19.635931   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:19.635991   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:19.670339   66232 cri.go:89] found id: ""
	I0314 01:01:19.670364   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.670371   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:19.670377   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:19.670430   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:19.709469   66232 cri.go:89] found id: ""
	I0314 01:01:19.709512   66232 logs.go:276] 0 containers: []
	W0314 01:01:19.709522   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:19.709533   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:19.709551   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:19.782157   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:19.782181   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:19.782192   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:19.866496   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:19.866531   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:19.910167   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:19.910198   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:19.963516   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:19.963546   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:22.478995   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:22.493273   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:22.493351   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:22.531559   66232 cri.go:89] found id: ""
	I0314 01:01:22.531581   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.531588   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:22.531594   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:22.531651   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:22.569478   66232 cri.go:89] found id: ""
	I0314 01:01:22.569508   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.569516   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:22.569524   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:22.569570   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:22.607573   66232 cri.go:89] found id: ""
	I0314 01:01:22.607599   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.607615   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:22.607625   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:22.607700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:22.644849   66232 cri.go:89] found id: ""
	I0314 01:01:22.644875   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.644885   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:22.644893   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:22.644950   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:22.683745   66232 cri.go:89] found id: ""
	I0314 01:01:22.683771   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.683779   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:22.683785   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:22.683845   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:22.723426   66232 cri.go:89] found id: ""
	I0314 01:01:22.723455   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.723462   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:22.723468   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:22.723512   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:22.761814   66232 cri.go:89] found id: ""
	I0314 01:01:22.761850   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.761860   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:22.761867   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:22.761918   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:22.799649   66232 cri.go:89] found id: ""
	I0314 01:01:22.799677   66232 logs.go:276] 0 containers: []
	W0314 01:01:22.799687   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:22.799697   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:22.799707   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:22.840183   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:22.840215   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:22.893385   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:22.893416   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:22.909225   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:22.909250   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:22.982333   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:22.982353   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:22.982364   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:19.835211   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:21.835716   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:20.194985   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:22.692889   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:21.040809   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:23.538236   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:25.560639   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:25.575003   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:25.575082   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:25.613540   66232 cri.go:89] found id: ""
	I0314 01:01:25.613571   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.613583   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:25.613591   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:25.613653   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:25.652340   66232 cri.go:89] found id: ""
	I0314 01:01:25.652365   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.652373   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:25.652379   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:25.652425   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:25.691035   66232 cri.go:89] found id: ""
	I0314 01:01:25.691070   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.691079   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:25.691087   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:25.691152   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:25.729666   66232 cri.go:89] found id: ""
	I0314 01:01:25.729695   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.729705   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:25.729713   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:25.729783   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:25.766836   66232 cri.go:89] found id: ""
	I0314 01:01:25.766863   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.766871   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:25.766877   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:25.766934   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:25.813690   66232 cri.go:89] found id: ""
	I0314 01:01:25.813715   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.813727   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:25.813734   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:25.813796   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:25.858630   66232 cri.go:89] found id: ""
	I0314 01:01:25.858668   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.858679   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:25.858688   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:25.858774   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:25.896340   66232 cri.go:89] found id: ""
	I0314 01:01:25.896365   66232 logs.go:276] 0 containers: []
	W0314 01:01:25.896372   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:25.896380   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:25.896392   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:25.949480   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:25.949513   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:25.965185   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:25.965211   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:26.041208   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:26.041228   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:26.041243   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:26.123892   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:26.123928   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:23.839306   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:26.335177   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.337014   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:24.695636   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:27.193395   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:29.200714   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:26.037924   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.038831   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:28.666449   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:28.679889   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:28.679948   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:28.717183   66232 cri.go:89] found id: ""
	I0314 01:01:28.717207   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.717214   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:28.717220   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:28.717275   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:28.761049   66232 cri.go:89] found id: ""
	I0314 01:01:28.761070   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.761077   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:28.761083   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:28.761133   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:28.800429   66232 cri.go:89] found id: ""
	I0314 01:01:28.800454   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.800462   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:28.800468   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:28.800523   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:28.841757   66232 cri.go:89] found id: ""
	I0314 01:01:28.841780   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.841788   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:28.841793   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:28.841838   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:28.883658   66232 cri.go:89] found id: ""
	I0314 01:01:28.883686   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.883696   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:28.883703   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:28.883759   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:28.918811   66232 cri.go:89] found id: ""
	I0314 01:01:28.918840   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.918851   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:28.918858   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:28.918916   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:28.955088   66232 cri.go:89] found id: ""
	I0314 01:01:28.955119   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.955130   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:28.955138   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:28.955195   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:28.992865   66232 cri.go:89] found id: ""
	I0314 01:01:28.992891   66232 logs.go:276] 0 containers: []
	W0314 01:01:28.992903   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:28.992913   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:28.992931   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:29.080095   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:29.080132   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:29.127764   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:29.127789   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:29.182075   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:29.182109   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:29.198865   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:29.198891   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:29.277413   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:31.777693   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:31.792353   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:31.792426   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:31.830873   66232 cri.go:89] found id: ""
	I0314 01:01:31.830897   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.830904   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:31.830910   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:31.830955   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:31.868648   66232 cri.go:89] found id: ""
	I0314 01:01:31.868670   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.868677   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:31.868683   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:31.868733   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:31.910124   66232 cri.go:89] found id: ""
	I0314 01:01:31.910146   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.910155   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:31.910160   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:31.910209   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:31.957558   66232 cri.go:89] found id: ""
	I0314 01:01:31.957584   66232 logs.go:276] 0 containers: []
	W0314 01:01:31.957592   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:31.957598   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:31.957652   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:32.000112   66232 cri.go:89] found id: ""
	I0314 01:01:32.000139   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.000157   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:32.000165   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:32.000229   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:32.037838   66232 cri.go:89] found id: ""
	I0314 01:01:32.037865   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.037876   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:32.037888   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:32.037949   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:32.076069   66232 cri.go:89] found id: ""
	I0314 01:01:32.076093   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.076101   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:32.076107   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:32.076172   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:32.114702   66232 cri.go:89] found id: ""
	I0314 01:01:32.114730   66232 logs.go:276] 0 containers: []
	W0314 01:01:32.114737   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:32.114745   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:32.114757   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:32.162043   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:32.162078   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:32.219038   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:32.219075   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:32.234331   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:32.234358   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:32.307667   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:32.307688   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:32.307700   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:30.835936   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:33.335575   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:31.692739   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:33.693455   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:30.537265   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:32.538754   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:35.037382   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:34.893945   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:34.907888   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:34.907966   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:34.944887   66232 cri.go:89] found id: ""
	I0314 01:01:34.944911   66232 logs.go:276] 0 containers: []
	W0314 01:01:34.944919   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:34.944925   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:34.944973   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:34.992937   66232 cri.go:89] found id: ""
	I0314 01:01:34.992964   66232 logs.go:276] 0 containers: []
	W0314 01:01:34.992974   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:34.992982   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:34.993040   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:35.030147   66232 cri.go:89] found id: ""
	I0314 01:01:35.030171   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.030178   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:35.030184   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:35.030230   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:35.065966   66232 cri.go:89] found id: ""
	I0314 01:01:35.065999   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.066010   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:35.066018   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:35.066077   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:35.104221   66232 cri.go:89] found id: ""
	I0314 01:01:35.104251   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.104262   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:35.104270   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:35.104347   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:35.145221   66232 cri.go:89] found id: ""
	I0314 01:01:35.145245   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.145253   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:35.145258   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:35.145313   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:35.185119   66232 cri.go:89] found id: ""
	I0314 01:01:35.185152   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.185162   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:35.185168   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:35.185228   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:35.228309   66232 cri.go:89] found id: ""
	I0314 01:01:35.228341   66232 logs.go:276] 0 containers: []
	W0314 01:01:35.228352   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:35.228363   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:35.228381   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:35.242185   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:35.242213   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:35.318542   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:35.318564   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:35.318578   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:35.396003   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:35.396042   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:35.437435   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:35.437464   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:37.992023   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:38.007180   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:38.007260   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:38.047871   66232 cri.go:89] found id: ""
	I0314 01:01:38.047906   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.047917   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:38.047925   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:38.047982   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:38.085359   66232 cri.go:89] found id: ""
	I0314 01:01:38.085388   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.085397   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:38.085404   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:38.085462   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:35.336258   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:37.835151   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:35.696328   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:38.192502   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:37.037490   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:39.038097   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:38.126190   66232 cri.go:89] found id: ""
	I0314 01:01:38.126219   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.126227   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:38.126233   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:38.126285   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:38.163163   66232 cri.go:89] found id: ""
	I0314 01:01:38.163190   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.163197   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:38.163202   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:38.163261   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:38.204338   66232 cri.go:89] found id: ""
	I0314 01:01:38.204360   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.204367   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:38.204372   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:38.204429   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:38.246252   66232 cri.go:89] found id: ""
	I0314 01:01:38.246278   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.246288   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:38.246296   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:38.246357   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:38.281173   66232 cri.go:89] found id: ""
	I0314 01:01:38.281198   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.281205   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:38.281211   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:38.281258   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:38.323744   66232 cri.go:89] found id: ""
	I0314 01:01:38.323774   66232 logs.go:276] 0 containers: []
	W0314 01:01:38.323784   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:38.323794   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:38.323808   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:38.377987   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:38.378020   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:38.392879   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:38.392904   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:38.479475   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:38.479501   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:38.479515   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:38.563409   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:38.563440   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:41.105122   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:41.119932   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:41.119997   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:41.158809   66232 cri.go:89] found id: ""
	I0314 01:01:41.158837   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.158847   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:41.158854   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:41.158915   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:41.201150   66232 cri.go:89] found id: ""
	I0314 01:01:41.201175   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.201183   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:41.201189   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:41.201239   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:41.240139   66232 cri.go:89] found id: ""
	I0314 01:01:41.240165   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.240173   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:41.240178   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:41.240232   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:41.278220   66232 cri.go:89] found id: ""
	I0314 01:01:41.278249   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.278257   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:41.278262   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:41.278310   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:41.313130   66232 cri.go:89] found id: ""
	I0314 01:01:41.313161   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.313170   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:41.313175   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:41.313235   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:41.351266   66232 cri.go:89] found id: ""
	I0314 01:01:41.351296   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.351305   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:41.351313   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:41.351378   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:41.389765   66232 cri.go:89] found id: ""
	I0314 01:01:41.389796   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.389807   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:41.389816   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:41.389893   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:41.437503   66232 cri.go:89] found id: ""
	I0314 01:01:41.437527   66232 logs.go:276] 0 containers: []
	W0314 01:01:41.437537   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:41.437553   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:41.437568   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:41.451137   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:41.451170   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:41.554349   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:41.554376   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:41.554391   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:41.634670   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:41.634713   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:41.678576   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:41.678607   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:39.836520   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:41.837350   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:40.192708   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:42.193948   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:41.038661   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:43.538547   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:44.237699   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:44.252678   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:44.252757   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:44.290393   66232 cri.go:89] found id: ""
	I0314 01:01:44.290420   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.290430   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:44.290438   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:44.290492   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:44.331394   66232 cri.go:89] found id: ""
	I0314 01:01:44.331426   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.331438   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:44.331446   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:44.331506   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:44.373654   66232 cri.go:89] found id: ""
	I0314 01:01:44.373686   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.373694   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:44.373702   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:44.373764   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:44.414168   66232 cri.go:89] found id: ""
	I0314 01:01:44.414198   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.414206   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:44.414212   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:44.414259   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:44.451158   66232 cri.go:89] found id: ""
	I0314 01:01:44.451183   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.451193   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:44.451201   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:44.451269   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:44.495410   66232 cri.go:89] found id: ""
	I0314 01:01:44.495436   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.495443   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:44.495450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:44.495509   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:44.539100   66232 cri.go:89] found id: ""
	I0314 01:01:44.539123   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.539129   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:44.539136   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:44.539189   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:44.581428   66232 cri.go:89] found id: ""
	I0314 01:01:44.581451   66232 logs.go:276] 0 containers: []
	W0314 01:01:44.581463   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:44.581473   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:44.581491   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:44.657373   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:44.657393   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:44.657406   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:44.742163   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:44.742198   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:44.786447   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:44.786481   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:44.840479   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:44.840534   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:47.355369   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:47.369427   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:47.369491   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:47.408529   66232 cri.go:89] found id: ""
	I0314 01:01:47.408559   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.408567   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:47.408574   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:47.408619   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:47.445164   66232 cri.go:89] found id: ""
	I0314 01:01:47.445192   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.445201   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:47.445208   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:47.445255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:47.503333   66232 cri.go:89] found id: ""
	I0314 01:01:47.503367   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.503378   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:47.503385   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:47.503441   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:47.544289   66232 cri.go:89] found id: ""
	I0314 01:01:47.544313   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.544322   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:47.544329   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:47.544389   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:47.581686   66232 cri.go:89] found id: ""
	I0314 01:01:47.581707   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.581715   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:47.581726   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:47.581773   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:47.620907   66232 cri.go:89] found id: ""
	I0314 01:01:47.620937   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.620948   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:47.620954   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:47.620999   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:47.655975   66232 cri.go:89] found id: ""
	I0314 01:01:47.656006   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.656018   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:47.656026   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:47.656088   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:47.694787   66232 cri.go:89] found id: ""
	I0314 01:01:47.694813   66232 logs.go:276] 0 containers: []
	W0314 01:01:47.694822   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:47.694832   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:47.694846   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:47.732722   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:47.732752   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:47.784521   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:47.784551   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:47.798074   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:47.798096   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:47.872951   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:47.872971   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:47.872984   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:44.336278   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:46.336942   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:44.693975   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:47.194065   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:46.037997   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:48.038275   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.456896   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:50.472083   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:50.472159   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:50.510213   66232 cri.go:89] found id: ""
	I0314 01:01:50.510236   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.510244   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:50.510251   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:50.510308   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:50.551878   66232 cri.go:89] found id: ""
	I0314 01:01:50.551906   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.551915   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:50.551923   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:50.551983   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:50.599971   66232 cri.go:89] found id: ""
	I0314 01:01:50.599993   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.600000   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:50.600011   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:50.600068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:50.636105   66232 cri.go:89] found id: ""
	I0314 01:01:50.636135   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.636146   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:50.636154   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:50.636218   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:50.674154   66232 cri.go:89] found id: ""
	I0314 01:01:50.674188   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.674199   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:50.674207   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:50.674273   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:50.711946   66232 cri.go:89] found id: ""
	I0314 01:01:50.711980   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.711992   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:50.711999   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:50.712048   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:50.750574   66232 cri.go:89] found id: ""
	I0314 01:01:50.750601   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.750612   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:50.750620   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:50.750679   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:50.788991   66232 cri.go:89] found id: ""
	I0314 01:01:50.789022   66232 logs.go:276] 0 containers: []
	W0314 01:01:50.789033   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:50.789045   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:50.789060   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:50.842491   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:50.842524   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:50.857759   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:50.857785   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:50.929715   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:50.929739   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:50.929754   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:51.008843   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:51.008883   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:48.835669   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.835802   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.335897   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:49.692834   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:52.191722   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:54.192101   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:50.543509   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.037040   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:53.554369   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:53.569045   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:53.569125   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:53.607571   66232 cri.go:89] found id: ""
	I0314 01:01:53.607602   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.607613   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:53.607621   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:53.607700   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:53.647998   66232 cri.go:89] found id: ""
	I0314 01:01:53.648027   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.648037   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:53.648044   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:53.648116   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:53.684825   66232 cri.go:89] found id: ""
	I0314 01:01:53.684855   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.684866   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:53.684873   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:53.684931   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:53.722438   66232 cri.go:89] found id: ""
	I0314 01:01:53.722465   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.722476   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:53.722484   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:53.722543   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:53.761945   66232 cri.go:89] found id: ""
	I0314 01:01:53.761987   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.761999   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:53.762014   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:53.762075   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:53.799307   66232 cri.go:89] found id: ""
	I0314 01:01:53.799338   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.799349   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:53.799362   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:53.799420   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:53.838685   66232 cri.go:89] found id: ""
	I0314 01:01:53.838713   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.838724   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:53.838731   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:53.838810   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:53.884324   66232 cri.go:89] found id: ""
	I0314 01:01:53.884351   66232 logs.go:276] 0 containers: []
	W0314 01:01:53.884360   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:53.884370   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:53.884382   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:53.942495   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:53.942527   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:54.007790   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:54.007828   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:54.023348   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:54.023378   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:54.099122   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:54.099150   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:54.099165   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:56.679464   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:56.693691   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:56.693753   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:56.731721   66232 cri.go:89] found id: ""
	I0314 01:01:56.731749   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.731756   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:56.731761   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:56.731811   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:56.766579   66232 cri.go:89] found id: ""
	I0314 01:01:56.766607   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.766614   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:56.766620   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:56.766675   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:56.807537   66232 cri.go:89] found id: ""
	I0314 01:01:56.807565   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.807574   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:56.807579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:56.807631   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:56.849077   66232 cri.go:89] found id: ""
	I0314 01:01:56.849100   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.849106   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:56.849112   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:56.849169   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:56.890982   66232 cri.go:89] found id: ""
	I0314 01:01:56.891003   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.891011   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:01:56.891016   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:01:56.891061   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:01:56.929769   66232 cri.go:89] found id: ""
	I0314 01:01:56.929790   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.929799   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:01:56.929805   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:01:56.929848   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:01:56.967319   66232 cri.go:89] found id: ""
	I0314 01:01:56.967346   66232 logs.go:276] 0 containers: []
	W0314 01:01:56.967356   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:01:56.967363   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:01:56.967421   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:01:57.004649   66232 cri.go:89] found id: ""
	I0314 01:01:57.004670   66232 logs.go:276] 0 containers: []
	W0314 01:01:57.004677   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:01:57.004685   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:01:57.004696   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:01:57.018578   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:01:57.018604   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:01:57.090826   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:01:57.090852   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:01:57.090868   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:01:57.170367   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:01:57.170398   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:01:57.216138   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:01:57.216179   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:01:55.835724   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:57.836100   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:56.192712   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:58.193199   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:55.538829   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:58.037589   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:00.038724   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:01:59.769685   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:01:59.786652   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:01:59.786713   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:01:59.869453   66232 cri.go:89] found id: ""
	I0314 01:01:59.869480   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.869491   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:01:59.869499   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:01:59.869568   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:01:59.915747   66232 cri.go:89] found id: ""
	I0314 01:01:59.915769   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.915777   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:01:59.915782   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:01:59.915840   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:01:59.951088   66232 cri.go:89] found id: ""
	I0314 01:01:59.951117   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.951127   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:01:59.951133   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:01:59.951197   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:01:59.986847   66232 cri.go:89] found id: ""
	I0314 01:01:59.986877   66232 logs.go:276] 0 containers: []
	W0314 01:01:59.986890   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:01:59.986898   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:01:59.986954   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:00.025390   66232 cri.go:89] found id: ""
	I0314 01:02:00.025420   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.025432   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:00.025440   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:00.025493   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:00.064174   66232 cri.go:89] found id: ""
	I0314 01:02:00.064206   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.064217   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:00.064226   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:00.064286   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:00.102079   66232 cri.go:89] found id: ""
	I0314 01:02:00.102102   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.102112   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:00.102119   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:00.102179   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:00.138672   66232 cri.go:89] found id: ""
	I0314 01:02:00.138700   66232 logs.go:276] 0 containers: []
	W0314 01:02:00.138711   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:00.138721   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:00.138740   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:00.153516   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:00.153548   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:00.226585   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:00.226616   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:00.226631   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:00.307861   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:00.307898   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:00.353938   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:00.353966   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:02.909252   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:02.923483   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:02.923560   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:02.964379   66232 cri.go:89] found id: ""
	I0314 01:02:02.964408   66232 logs.go:276] 0 containers: []
	W0314 01:02:02.964419   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:02.964427   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:02.964486   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:03.001988   66232 cri.go:89] found id: ""
	I0314 01:02:03.002018   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.002028   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:03.002036   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:03.002106   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:03.043534   66232 cri.go:89] found id: ""
	I0314 01:02:03.043561   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.043572   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:03.043579   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:03.043637   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:03.083413   66232 cri.go:89] found id: ""
	I0314 01:02:03.083436   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.083444   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:03.083450   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:03.083504   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:01:59.837128   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.336485   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:00.692314   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.693186   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:02.039631   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:04.536890   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:03.117627   66232 cri.go:89] found id: ""
	I0314 01:02:03.117652   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.117664   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:03.117670   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:03.117718   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:03.151758   66232 cri.go:89] found id: ""
	I0314 01:02:03.151791   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.151802   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:03.151810   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:03.151861   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:03.192091   66232 cri.go:89] found id: ""
	I0314 01:02:03.192112   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.192118   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:03.192124   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:03.192178   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:03.235995   66232 cri.go:89] found id: ""
	I0314 01:02:03.236019   66232 logs.go:276] 0 containers: []
	W0314 01:02:03.236029   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:03.236039   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:03.236053   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:03.289431   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:03.289475   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:03.305271   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:03.305325   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:03.383902   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:03.383922   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:03.383937   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:03.462882   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:03.462926   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:06.007991   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:06.023709   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:06.023768   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:06.063630   66232 cri.go:89] found id: ""
	I0314 01:02:06.063655   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.063662   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:06.063669   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:06.063727   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:06.103042   66232 cri.go:89] found id: ""
	I0314 01:02:06.103074   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.103083   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:06.103092   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:06.103149   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:06.139774   66232 cri.go:89] found id: ""
	I0314 01:02:06.139799   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.139810   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:06.139817   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:06.139874   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:06.176671   66232 cri.go:89] found id: ""
	I0314 01:02:06.176713   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.176724   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:06.176732   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:06.176798   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:06.216798   66232 cri.go:89] found id: ""
	I0314 01:02:06.216828   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.216840   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:06.216847   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:06.216903   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:06.256606   66232 cri.go:89] found id: ""
	I0314 01:02:06.256635   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.256645   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:06.256653   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:06.256712   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:06.295087   66232 cri.go:89] found id: ""
	I0314 01:02:06.295119   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.295129   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:06.295137   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:06.295198   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:06.329411   66232 cri.go:89] found id: ""
	I0314 01:02:06.329441   66232 logs.go:276] 0 containers: []
	W0314 01:02:06.329454   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:06.329464   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:06.329489   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:06.412363   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:06.412409   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:06.458902   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:06.458932   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:06.510147   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:06.510182   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:06.526670   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:06.526695   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:06.604970   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:04.835705   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:07.335832   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:04.693230   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:06.694579   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:08.697716   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:06.538380   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:08.538547   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:09.106124   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:09.119646   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:09.119709   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:09.155771   66232 cri.go:89] found id: ""
	I0314 01:02:09.155804   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.155815   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:09.155824   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:09.155883   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:09.191683   66232 cri.go:89] found id: ""
	I0314 01:02:09.191722   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.191734   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:09.191742   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:09.191808   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:09.227010   66232 cri.go:89] found id: ""
	I0314 01:02:09.227033   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.227041   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:09.227050   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:09.227118   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:09.262820   66232 cri.go:89] found id: ""
	I0314 01:02:09.262850   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.262861   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:09.262869   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:09.262925   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:09.296057   66232 cri.go:89] found id: ""
	I0314 01:02:09.296092   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.296102   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:09.296109   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:09.296171   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:09.329589   66232 cri.go:89] found id: ""
	I0314 01:02:09.329615   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.329626   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:09.329634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:09.329685   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:09.374675   66232 cri.go:89] found id: ""
	I0314 01:02:09.374702   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.374710   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:09.374718   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:09.374785   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:09.412467   66232 cri.go:89] found id: ""
	I0314 01:02:09.412497   66232 logs.go:276] 0 containers: []
	W0314 01:02:09.412508   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:09.412518   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:09.412535   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:09.465354   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:09.465386   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:09.481823   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:09.481849   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:09.558431   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:09.558458   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:09.558475   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:09.641132   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:09.641171   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:12.190189   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:12.203783   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:12.203858   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:12.240189   66232 cri.go:89] found id: ""
	I0314 01:02:12.240219   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.240230   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:12.240238   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:12.240296   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:12.276307   66232 cri.go:89] found id: ""
	I0314 01:02:12.276336   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.276346   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:12.276354   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:12.276415   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:12.316916   66232 cri.go:89] found id: ""
	I0314 01:02:12.316949   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.316967   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:12.316975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:12.317036   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:12.356871   66232 cri.go:89] found id: ""
	I0314 01:02:12.356900   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.356910   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:12.356918   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:12.356981   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:12.391983   66232 cri.go:89] found id: ""
	I0314 01:02:12.392015   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.392026   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:12.392035   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:12.392105   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:12.428823   66232 cri.go:89] found id: ""
	I0314 01:02:12.428857   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.428868   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:12.428877   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:12.428938   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:12.466319   66232 cri.go:89] found id: ""
	I0314 01:02:12.466342   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.466349   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:12.466354   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:12.466413   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:12.502277   66232 cri.go:89] found id: ""
	I0314 01:02:12.502309   66232 logs.go:276] 0 containers: []
	W0314 01:02:12.502321   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:12.502333   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:12.502352   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:12.582309   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:12.582340   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:12.621333   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:12.621357   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:12.678396   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:12.678432   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:12.694371   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:12.694397   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:12.767592   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:09.337016   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.339617   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.192226   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:13.195180   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:11.037728   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:13.037824   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.038206   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.268149   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:15.281634   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:15.281707   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:15.316336   66232 cri.go:89] found id: ""
	I0314 01:02:15.316358   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.316366   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:15.316373   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:15.316437   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:15.356168   66232 cri.go:89] found id: ""
	I0314 01:02:15.356194   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.356201   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:15.356206   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:15.356257   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:15.394686   66232 cri.go:89] found id: ""
	I0314 01:02:15.394714   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.394726   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:15.394734   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:15.394813   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:15.433996   66232 cri.go:89] found id: ""
	I0314 01:02:15.434023   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.434034   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:15.434042   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:15.434103   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:15.479544   66232 cri.go:89] found id: ""
	I0314 01:02:15.479572   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.479583   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:15.479590   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:15.479659   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:15.514835   66232 cri.go:89] found id: ""
	I0314 01:02:15.514865   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.514875   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:15.514883   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:15.514942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:15.554980   66232 cri.go:89] found id: ""
	I0314 01:02:15.555011   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.555022   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:15.555030   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:15.555092   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:15.590130   66232 cri.go:89] found id: ""
	I0314 01:02:15.590167   66232 logs.go:276] 0 containers: []
	W0314 01:02:15.590178   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:15.590188   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:15.590203   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:15.658375   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:15.658394   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:15.658407   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:15.737774   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:15.737806   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:15.780480   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:15.780512   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:15.832787   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:15.832830   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:13.834955   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.836544   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:17.836736   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:15.693510   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:18.193089   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:17.537729   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:19.540149   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:18.350032   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:18.364871   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:18.364931   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:18.406581   66232 cri.go:89] found id: ""
	I0314 01:02:18.406611   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.406620   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:18.406633   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:18.406696   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:18.446140   66232 cri.go:89] found id: ""
	I0314 01:02:18.446166   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.446176   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:18.446183   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:18.446242   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:18.492662   66232 cri.go:89] found id: ""
	I0314 01:02:18.492705   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.492713   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:18.492719   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:18.492777   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:18.535933   66232 cri.go:89] found id: ""
	I0314 01:02:18.535961   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.535972   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:18.535980   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:18.536056   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:18.574133   66232 cri.go:89] found id: ""
	I0314 01:02:18.574159   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.574167   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:18.574173   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:18.574227   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:18.612726   66232 cri.go:89] found id: ""
	I0314 01:02:18.612750   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.612757   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:18.612763   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:18.612815   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:18.653068   66232 cri.go:89] found id: ""
	I0314 01:02:18.653092   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.653099   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:18.653105   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:18.653148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:18.692840   66232 cri.go:89] found id: ""
	I0314 01:02:18.692880   66232 logs.go:276] 0 containers: []
	W0314 01:02:18.692890   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:18.692902   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:18.692915   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:18.748680   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:18.748717   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:18.764026   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:18.764054   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:18.841767   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:18.841791   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:18.841805   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:18.923479   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:18.923512   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:21.467679   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:21.482326   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:21.482400   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:21.519603   66232 cri.go:89] found id: ""
	I0314 01:02:21.519627   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.519635   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:21.519641   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:21.519711   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:21.562301   66232 cri.go:89] found id: ""
	I0314 01:02:21.562325   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.562333   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:21.562338   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:21.562395   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:21.599503   66232 cri.go:89] found id: ""
	I0314 01:02:21.599531   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.599539   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:21.599545   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:21.599598   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:21.635347   66232 cri.go:89] found id: ""
	I0314 01:02:21.635378   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.635390   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:21.635397   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:21.635458   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:21.672622   66232 cri.go:89] found id: ""
	I0314 01:02:21.672648   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.672658   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:21.672667   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:21.672719   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:21.713177   66232 cri.go:89] found id: ""
	I0314 01:02:21.713201   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.713209   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:21.713217   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:21.713277   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:21.754273   66232 cri.go:89] found id: ""
	I0314 01:02:21.754312   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.754336   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:21.754350   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:21.754408   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:21.793782   66232 cri.go:89] found id: ""
	I0314 01:02:21.793832   66232 logs.go:276] 0 containers: []
	W0314 01:02:21.793852   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:21.793864   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:21.793886   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:21.877495   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:21.877521   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:21.877536   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:21.963446   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:21.963485   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:22.005250   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:22.005286   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:22.081328   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:22.081368   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:20.336150   65864 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:21.836598   65864 pod_ready.go:81] duration metric: took 4m0.008051794s for pod "metrics-server-57f55c9bc5-7pzll" in "kube-system" namespace to be "Ready" ...
	E0314 01:02:21.836623   65864 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:02:21.836633   65864 pod_ready.go:38] duration metric: took 4m4.551998385s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:02:21.836650   65864 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:02:21.836684   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:21.836737   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:21.913367   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:21.913392   65864 cri.go:89] found id: ""
	I0314 01:02:21.913401   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:21.913461   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:21.920425   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:21.920491   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:21.968527   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:21.968560   65864 cri.go:89] found id: ""
	I0314 01:02:21.968578   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:21.968641   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:21.973938   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:21.974019   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:22.027214   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:22.027239   65864 cri.go:89] found id: ""
	I0314 01:02:22.027250   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:22.027301   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.033919   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:22.034007   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:22.085453   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:22.085477   65864 cri.go:89] found id: ""
	I0314 01:02:22.085486   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:22.085541   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.091651   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:22.091726   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:22.134083   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:22.134112   65864 cri.go:89] found id: ""
	I0314 01:02:22.134121   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:22.134179   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.139013   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:22.139089   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:22.176760   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:22.176785   65864 cri.go:89] found id: ""
	I0314 01:02:22.176795   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:22.176844   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.182497   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:22.182573   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:22.236966   65864 cri.go:89] found id: ""
	I0314 01:02:22.237000   65864 logs.go:276] 0 containers: []
	W0314 01:02:22.237010   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:22.237017   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:22.237078   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:22.289422   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:22.289448   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:22.289454   65864 cri.go:89] found id: ""
	I0314 01:02:22.289462   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:22.289526   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.295489   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:22.300166   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:22.300189   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:22.361740   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:22.361779   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:22.432402   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:22.432443   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:22.476348   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:22.476378   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:22.516881   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:22.516911   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:22.576864   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:22.576899   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:22.622739   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:22.622783   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:22.679757   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:22.679794   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:22.882084   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:22.882126   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:22.937962   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:22.937999   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:22.994180   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:22.994209   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:23.038730   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:23.038761   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:23.518422   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:23.518471   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:20.193555   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:22.194625   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:22.039562   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:24.043053   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:24.599757   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:24.615216   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:24.615273   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:24.654495   66232 cri.go:89] found id: ""
	I0314 01:02:24.654521   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.654529   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:24.654535   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:24.654581   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:24.691822   66232 cri.go:89] found id: ""
	I0314 01:02:24.691854   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.691864   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:24.691872   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:24.691927   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:24.734755   66232 cri.go:89] found id: ""
	I0314 01:02:24.734796   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.734806   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:24.734812   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:24.734864   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:24.770474   66232 cri.go:89] found id: ""
	I0314 01:02:24.770502   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.770513   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:24.770520   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:24.770564   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:24.807518   66232 cri.go:89] found id: ""
	I0314 01:02:24.807549   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.807562   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:24.807570   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:24.807636   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:24.844469   66232 cri.go:89] found id: ""
	I0314 01:02:24.844500   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.844513   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:24.844521   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:24.844585   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:24.882099   66232 cri.go:89] found id: ""
	I0314 01:02:24.882136   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.882147   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:24.882155   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:24.882215   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:24.922711   66232 cri.go:89] found id: ""
	I0314 01:02:24.922751   66232 logs.go:276] 0 containers: []
	W0314 01:02:24.922773   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:24.922787   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:24.922802   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:24.965349   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:24.965374   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:25.021552   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:25.021585   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:25.039990   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:25.040027   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:25.116945   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:25.116967   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:25.116981   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:27.706427   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:27.722129   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:27.722193   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:27.762976   66232 cri.go:89] found id: ""
	I0314 01:02:27.763015   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.763023   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:27.763029   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:27.763077   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:27.803939   66232 cri.go:89] found id: ""
	I0314 01:02:27.803979   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.803990   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:27.803997   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:27.804068   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:27.844923   66232 cri.go:89] found id: ""
	I0314 01:02:27.844946   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.844953   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:27.844959   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:27.845015   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:27.882694   66232 cri.go:89] found id: ""
	I0314 01:02:27.882717   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.882725   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:27.882731   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:27.882801   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:27.922926   66232 cri.go:89] found id: ""
	I0314 01:02:27.922958   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.922968   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:27.922975   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:27.923035   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:27.960120   66232 cri.go:89] found id: ""
	I0314 01:02:27.960149   66232 logs.go:276] 0 containers: []
	W0314 01:02:27.960160   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:27.960168   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:27.960228   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:28.015021   66232 cri.go:89] found id: ""
	I0314 01:02:28.015047   66232 logs.go:276] 0 containers: []
	W0314 01:02:28.015056   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:28.015062   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:28.015119   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:28.054923   66232 cri.go:89] found id: ""
	I0314 01:02:28.054946   66232 logs.go:276] 0 containers: []
	W0314 01:02:28.054952   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:28.054960   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:28.054972   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:26.038373   65864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:26.055483   65864 api_server.go:72] duration metric: took 4m14.013216316s to wait for apiserver process to appear ...
	I0314 01:02:26.055505   65864 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:02:26.055536   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:26.055585   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:26.108344   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:26.108363   65864 cri.go:89] found id: ""
	I0314 01:02:26.108370   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:26.108420   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.112806   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:26.112872   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:26.155399   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:26.155417   65864 cri.go:89] found id: ""
	I0314 01:02:26.155424   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:26.155468   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.159725   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:26.159780   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:26.201938   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:26.201960   65864 cri.go:89] found id: ""
	I0314 01:02:26.201968   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:26.202012   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.206751   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:26.206831   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:26.252327   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:26.252350   65864 cri.go:89] found id: ""
	I0314 01:02:26.252357   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:26.252405   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.257325   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:26.257387   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:26.297880   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:26.297901   65864 cri.go:89] found id: ""
	I0314 01:02:26.297910   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:26.297965   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.302607   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:26.302679   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:26.343104   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:26.343131   65864 cri.go:89] found id: ""
	I0314 01:02:26.343141   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:26.343207   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.347594   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:26.347652   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:26.390465   65864 cri.go:89] found id: ""
	I0314 01:02:26.390495   65864 logs.go:276] 0 containers: []
	W0314 01:02:26.390505   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:26.390517   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:26.390576   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:26.434540   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:26.434566   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:26.434572   65864 cri.go:89] found id: ""
	I0314 01:02:26.434582   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:26.434644   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.439794   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:26.445012   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:26.445036   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:26.488302   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:26.488331   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:26.526601   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:26.526630   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:26.578955   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:26.578989   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:26.633535   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:26.633573   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:26.764496   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:26.764533   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:26.822677   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:26.822713   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:26.866628   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:26.866653   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:26.909498   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:26.909524   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:26.965612   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:26.965646   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:27.004922   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:27.004974   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:27.422800   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:27.422844   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:27.441082   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:27.441113   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:24.693782   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:27.193414   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:26.537535   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:28.539922   66021 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:28.111690   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:28.111723   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:28.126158   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:28.126189   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:28.200521   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:28.200542   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:28.200554   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:28.279637   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:28.279672   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:30.824286   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:30.840707   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.840787   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.888628   66232 cri.go:89] found id: ""
	I0314 01:02:30.888658   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.888669   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:30.888677   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.888758   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.934219   66232 cri.go:89] found id: ""
	I0314 01:02:30.934254   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.934264   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:30.934272   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.934332   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.979679   66232 cri.go:89] found id: ""
	I0314 01:02:30.979702   66232 logs.go:276] 0 containers: []
	W0314 01:02:30.979713   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:30.979721   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.979792   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:31.024045   66232 cri.go:89] found id: ""
	I0314 01:02:31.024074   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.024085   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:31.024093   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:31.024150   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:31.070153   66232 cri.go:89] found id: ""
	I0314 01:02:31.070185   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.070197   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:31.070204   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:31.070267   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:31.121943   66232 cri.go:89] found id: ""
	I0314 01:02:31.121972   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.121983   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:31.121992   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:31.122056   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:31.168934   66232 cri.go:89] found id: ""
	I0314 01:02:31.168951   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.168959   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:31.168965   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:31.169040   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:31.213885   66232 cri.go:89] found id: ""
	I0314 01:02:31.213917   66232 logs.go:276] 0 containers: []
	W0314 01:02:31.213929   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:31.213939   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:31.213958   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:31.304097   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:31.304127   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:31.304142   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:31.388525   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:31.388566   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:31.442920   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:31.442953   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:31.505932   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:31.505965   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:29.995508   65864 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0314 01:02:30.001049   65864 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0314 01:02:30.002172   65864 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 01:02:30.002194   65864 api_server.go:131] duration metric: took 3.946684299s to wait for apiserver health ...
	I0314 01:02:30.002201   65864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:02:30.002224   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.002268   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.043814   65864 cri.go:89] found id: "310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:30.043836   65864 cri.go:89] found id: ""
	I0314 01:02:30.043850   65864 logs.go:276] 1 containers: [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239]
	I0314 01:02:30.043904   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.048215   65864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.048287   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.085507   65864 cri.go:89] found id: "d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:30.085530   65864 cri.go:89] found id: ""
	I0314 01:02:30.085538   65864 logs.go:276] 1 containers: [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b]
	I0314 01:02:30.085587   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.089899   65864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.089958   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.129518   65864 cri.go:89] found id: "7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:30.129538   65864 cri.go:89] found id: ""
	I0314 01:02:30.129545   65864 logs.go:276] 1 containers: [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2]
	I0314 01:02:30.129588   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.134037   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.134121   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:30.178092   65864 cri.go:89] found id: "eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:30.178114   65864 cri.go:89] found id: ""
	I0314 01:02:30.178122   65864 logs.go:276] 1 containers: [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf]
	I0314 01:02:30.178174   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.184655   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:30.184712   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:30.223945   65864 cri.go:89] found id: "3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:30.223969   65864 cri.go:89] found id: ""
	I0314 01:02:30.223987   65864 logs.go:276] 1 containers: [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828]
	I0314 01:02:30.224051   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.228354   65864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:30.228410   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:30.265712   65864 cri.go:89] found id: "396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:30.265741   65864 cri.go:89] found id: ""
	I0314 01:02:30.265758   65864 logs.go:276] 1 containers: [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2]
	I0314 01:02:30.265814   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.270260   65864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:30.270312   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:30.320283   65864 cri.go:89] found id: ""
	I0314 01:02:30.320314   65864 logs.go:276] 0 containers: []
	W0314 01:02:30.320327   65864 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:30.320334   65864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:30.320385   65864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:30.360838   65864 cri.go:89] found id: "ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:30.360865   65864 cri.go:89] found id: "3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:30.360869   65864 cri.go:89] found id: ""
	I0314 01:02:30.360876   65864 logs.go:276] 2 containers: [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0]
	I0314 01:02:30.360919   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.366350   65864 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.370839   65864 logs.go:123] Gathering logs for kube-scheduler [eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf] ...
	I0314 01:02:30.370862   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eaf7cd9d2f3f8af0ecb7eb2ce34a652979ae0ccca8532952040e522def8e4faf"
	I0314 01:02:30.422403   65864 logs.go:123] Gathering logs for kube-proxy [3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828] ...
	I0314 01:02:30.422432   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c9a4136bfd32e117d9efa599256f3bf78bd556637a32d6c1705fbc95ba89828"
	I0314 01:02:30.461303   65864 logs.go:123] Gathering logs for storage-provisioner [ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861] ...
	I0314 01:02:30.461333   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8fd6893aa1d113a757345bc7b38bc562ac5f09f6ec1b0d3b89889d8c611861"
	I0314 01:02:30.500335   65864 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:30.500364   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:30.925694   65864 logs.go:123] Gathering logs for container status ...
	I0314 01:02:30.925740   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:30.977607   65864 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:30.977643   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:31.040726   65864 logs.go:123] Gathering logs for kube-apiserver [310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239] ...
	I0314 01:02:31.040758   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 310169fe474c450a84b70e14284c4322e9e2593612bdf59b92e59455abda1239"
	I0314 01:02:31.097774   65864 logs.go:123] Gathering logs for etcd [d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b] ...
	I0314 01:02:31.097811   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05f2a8d7b1aa824b438cca2541b99ce5cfcfc6e355dbfde56fb07fdf3fc201b"
	I0314 01:02:31.161995   65864 logs.go:123] Gathering logs for kube-controller-manager [396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2] ...
	I0314 01:02:31.162038   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 396e0c2ab791a703b5376efc8af521e41a3941aa851bb8bb132601123a12e0e2"
	I0314 01:02:31.229782   65864 logs.go:123] Gathering logs for storage-provisioner [3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0] ...
	I0314 01:02:31.229823   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d431baedcd8c4ba6a24b4b00a89211bdb10940a1ac00eb996acb9bdbd35e0a0"
	I0314 01:02:31.268715   65864 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:31.268742   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:31.288135   65864 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:31.288164   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:31.459345   65864 logs.go:123] Gathering logs for coredns [7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2] ...
	I0314 01:02:31.459375   65864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a233103631701918640ea55e61dfe4ac60e237d0a9b70c178ad3c1e0656a5a2"
	I0314 01:02:34.020556   65864 system_pods.go:59] 8 kube-system pods found
	I0314 01:02:34.020589   65864 system_pods.go:61] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running
	I0314 01:02:34.020598   65864 system_pods.go:61] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running
	I0314 01:02:34.020607   65864 system_pods.go:61] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running
	I0314 01:02:34.020612   65864 system_pods.go:61] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running
	I0314 01:02:34.020616   65864 system_pods.go:61] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 01:02:34.020620   65864 system_pods.go:61] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running
	I0314 01:02:34.020628   65864 system_pods.go:61] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:34.020634   65864 system_pods.go:61] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 01:02:34.020644   65864 system_pods.go:74] duration metric: took 4.018436618s to wait for pod list to return data ...
	I0314 01:02:34.020653   65864 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:02:34.023473   65864 default_sa.go:45] found service account: "default"
	I0314 01:02:34.023496   65864 default_sa.go:55] duration metric: took 2.831779ms for default service account to be created ...
	I0314 01:02:34.023504   65864 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:02:34.030011   65864 system_pods.go:86] 8 kube-system pods found
	I0314 01:02:34.030060   65864 system_pods.go:89] "coredns-76f75df574-lptfk" [597ce2ed-6ab6-418e-9720-9ae9d275cb33] Running
	I0314 01:02:34.030068   65864 system_pods.go:89] "etcd-no-preload-585806" [3088a406-be51-4e68-bc9f-b4c569fa9f9a] Running
	I0314 01:02:34.030077   65864 system_pods.go:89] "kube-apiserver-no-preload-585806" [406a8970-0f1a-43e8-8aca-888c6a692b39] Running
	I0314 01:02:34.030083   65864 system_pods.go:89] "kube-controller-manager-no-preload-585806" [c5d6e95e-ade4-4010-805e-e63fc67be0f3] Running
	I0314 01:02:34.030092   65864 system_pods.go:89] "kube-proxy-wpdb9" [013df8e8-ce80-4cff-937a-16742369c561] Running
	I0314 01:02:34.030107   65864 system_pods.go:89] "kube-scheduler-no-preload-585806" [cb269187-33f1-4af0-9cf3-e156d5b44216] Running
	I0314 01:02:34.030124   65864 system_pods.go:89] "metrics-server-57f55c9bc5-7pzll" [84952403-8cff-4fa3-b7ef-d98ab0edf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:34.030131   65864 system_pods.go:89] "storage-provisioner" [113f608a-28d1-4365-9898-dd6f37150317] Running
	I0314 01:02:34.030143   65864 system_pods.go:126] duration metric: took 6.633594ms to wait for k8s-apps to be running ...
	I0314 01:02:34.030188   65864 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:02:34.030262   65864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:34.050932   65864 system_svc.go:56] duration metric: took 20.734837ms WaitForService to wait for kubelet
	I0314 01:02:34.050961   65864 kubeadm.go:576] duration metric: took 4m22.008698948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:02:34.050980   65864 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:02:34.055036   65864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:02:34.055068   65864 node_conditions.go:123] node cpu capacity is 2
	I0314 01:02:34.055083   65864 node_conditions.go:105] duration metric: took 4.097364ms to run NodePressure ...
	I0314 01:02:34.055105   65864 start.go:240] waiting for startup goroutines ...
	I0314 01:02:34.055118   65864 start.go:245] waiting for cluster config update ...
	I0314 01:02:34.055132   65864 start.go:254] writing updated cluster config ...
	I0314 01:02:34.055496   65864 ssh_runner.go:195] Run: rm -f paused
	I0314 01:02:34.113276   65864 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 01:02:34.115462   65864 out.go:177] * Done! kubectl is now configured to use "no-preload-585806" cluster and "default" namespace by default
	I0314 01:02:29.693041   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:32.194975   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:30.538234   66021 pod_ready.go:81] duration metric: took 4m0.007493671s for pod "metrics-server-57f55c9bc5-kll8v" in "kube-system" namespace to be "Ready" ...
	E0314 01:02:30.538259   66021 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:02:30.538266   66021 pod_ready.go:38] duration metric: took 4m4.916255619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:02:30.538278   66021 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:02:30.538307   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:30.538363   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:30.592811   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:30.592839   66021 cri.go:89] found id: ""
	I0314 01:02:30.592850   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:30.592911   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.598839   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:30.598908   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:30.642277   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:30.642301   66021 cri.go:89] found id: ""
	I0314 01:02:30.642310   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:30.642362   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.646745   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:30.646815   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:30.696518   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:30.696538   66021 cri.go:89] found id: ""
	I0314 01:02:30.696548   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:30.696601   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.701433   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:30.701496   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:30.741777   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:30.741805   66021 cri.go:89] found id: ""
	I0314 01:02:30.741815   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:30.741873   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.746610   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:30.746678   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:30.802714   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:30.802734   66021 cri.go:89] found id: ""
	I0314 01:02:30.802743   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:30.802905   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.807733   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:30.807800   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:30.857325   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:30.857348   66021 cri.go:89] found id: ""
	I0314 01:02:30.857357   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:30.857411   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.864272   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:30.864342   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:30.913206   66021 cri.go:89] found id: ""
	I0314 01:02:30.913233   66021 logs.go:276] 0 containers: []
	W0314 01:02:30.913240   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:30.913246   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:30.913306   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:30.962101   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:30.962140   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:30.962146   66021 cri.go:89] found id: ""
	I0314 01:02:30.962164   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:30.962225   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.968138   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:30.974297   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:30.974321   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:31.169483   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:31.169515   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:31.231894   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:31.231933   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:31.292732   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:31.292784   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:31.340076   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:31.340116   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:31.405921   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:31.405964   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:31.456370   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:31.456398   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:31.504710   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:31.504736   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:31.989644   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:31.989675   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:32.048608   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:32.048641   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:32.063791   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:32.063820   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:32.104259   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:32.104285   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:32.143364   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:32.143388   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:34.704603   66021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:34.723060   66021 api_server.go:72] duration metric: took 4m16.82749669s to wait for apiserver process to appear ...
	I0314 01:02:34.723094   66021 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:02:34.723131   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:34.723195   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:34.763208   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:34.763235   66021 cri.go:89] found id: ""
	I0314 01:02:34.763245   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:34.763321   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.768746   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:34.768824   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:34.811836   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:34.811859   66021 cri.go:89] found id: ""
	I0314 01:02:34.811867   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:34.811921   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.816649   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:34.816714   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:34.857291   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:34.857312   66021 cri.go:89] found id: ""
	I0314 01:02:34.857319   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:34.857364   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.861988   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:34.862069   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:34.903495   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:34.903520   66021 cri.go:89] found id: ""
	I0314 01:02:34.903529   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:34.903589   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.908672   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:34.908728   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:34.954304   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:34.954327   66021 cri.go:89] found id: ""
	I0314 01:02:34.954335   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:34.954381   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:34.959231   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:34.959288   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:35.004076   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:35.004102   66021 cri.go:89] found id: ""
	I0314 01:02:35.004111   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:35.004164   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.009125   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:35.009193   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:35.049932   66021 cri.go:89] found id: ""
	I0314 01:02:35.049961   66021 logs.go:276] 0 containers: []
	W0314 01:02:35.049971   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:35.049979   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:35.050047   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:35.107527   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:35.107575   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:35.107582   66021 cri.go:89] found id: ""
	I0314 01:02:35.107591   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:35.107649   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.112355   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:35.116898   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:35.116925   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
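For reference, the component-discovery loop in the 66021 lines above reduces to the shell sequence below. This is a minimal sketch assuming crictl and CRI-O are present on the node (the container ID is one discovered in this run); it is not an excerpt from the minikube source.

    # list container IDs for each control-plane component, as minikube does above
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      sudo crictl ps -a --quiet --name="${name}"
    done
    # then tail the last 400 log lines of a discovered container, e.g. coredns:
    sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82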
	I0314 01:02:34.021725   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:34.039342   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:34.039420   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:34.086740   66232 cri.go:89] found id: ""
	I0314 01:02:34.086775   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.086787   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:02:34.086803   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:34.086869   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:34.131404   66232 cri.go:89] found id: ""
	I0314 01:02:34.131432   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.131440   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:02:34.131445   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:34.131497   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:34.179153   66232 cri.go:89] found id: ""
	I0314 01:02:34.179182   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.179192   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:02:34.179199   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:34.179255   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:34.228867   66232 cri.go:89] found id: ""
	I0314 01:02:34.228892   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.228902   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:02:34.228908   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:34.228942   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:34.272680   66232 cri.go:89] found id: ""
	I0314 01:02:34.272705   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.272715   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:02:34.272722   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:34.272772   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:34.311626   66232 cri.go:89] found id: ""
	I0314 01:02:34.311672   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.311684   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:02:34.311692   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:34.311751   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:34.349977   66232 cri.go:89] found id: ""
	I0314 01:02:34.349998   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.350006   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:34.350012   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:02:34.350070   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:02:34.398456   66232 cri.go:89] found id: ""
	I0314 01:02:34.398481   66232 logs.go:276] 0 containers: []
	W0314 01:02:34.398491   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:02:34.398503   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:34.398515   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:34.472170   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:34.472208   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:34.498046   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:34.498076   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:02:34.574474   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:02:34.574496   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:34.574529   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:34.656398   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:02:34.656435   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
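The host-level log gathering for the 66232 run above amounts to the commands below, a sketch assuming the same root shell access minikube uses; note that in this run "describe nodes" fails because nothing is listening on localhost:8443 yet.

    sudo journalctl -u kubelet -n 400                                          # kubelet unit log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings and errors
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400                                             # CRI-O unit log
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a             # container status, with a docker fallback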
	I0314 01:02:37.201236   66232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:02:37.216950   66232 kubeadm.go:591] duration metric: took 4m2.27726413s to restartPrimaryControlPlane
	W0314 01:02:37.217024   66232 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 01:02:37.217054   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 01:02:34.693825   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:37.191981   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:39.193819   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:35.155896   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:35.155929   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:35.198893   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:35.198923   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:35.258044   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:35.258076   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:35.296826   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:35.296859   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:35.349583   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:35.349619   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:35.400768   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:35.400805   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:35.528320   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:35.528357   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:35.571141   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:35.571174   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:35.612630   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:35.612658   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:36.034287   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:36.034321   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:36.093027   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:36.093054   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:36.150546   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:36.150589   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:38.673291   66021 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0314 01:02:38.678087   66021 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0314 01:02:38.679655   66021 api_server.go:141] control plane version: v1.28.4
	I0314 01:02:38.679674   66021 api_server.go:131] duration metric: took 3.956573598s to wait for apiserver health ...
	I0314 01:02:38.679680   66021 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:02:38.679700   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:02:38.679741   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:02:38.727884   66021 cri.go:89] found id: "a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:38.727908   66021 cri.go:89] found id: ""
	I0314 01:02:38.727918   66021 logs.go:276] 1 containers: [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b]
	I0314 01:02:38.727974   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.732935   66021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:02:38.733003   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:02:38.771359   66021 cri.go:89] found id: "2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:38.771387   66021 cri.go:89] found id: ""
	I0314 01:02:38.771397   66021 logs.go:276] 1 containers: [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8]
	I0314 01:02:38.771452   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.775888   66021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:02:38.775948   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:02:38.814905   66021 cri.go:89] found id: "e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:38.814934   66021 cri.go:89] found id: ""
	I0314 01:02:38.814944   66021 logs.go:276] 1 containers: [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82]
	I0314 01:02:38.815018   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.820018   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:02:38.820096   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:02:38.869174   66021 cri.go:89] found id: "46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:38.869200   66021 cri.go:89] found id: ""
	I0314 01:02:38.869210   66021 logs.go:276] 1 containers: [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef]
	I0314 01:02:38.869268   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.879998   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:02:38.880071   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:02:38.960143   66021 cri.go:89] found id: "08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:38.960187   66021 cri.go:89] found id: ""
	I0314 01:02:38.960198   66021 logs.go:276] 1 containers: [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0]
	I0314 01:02:38.960258   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:38.964872   66021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:02:38.964940   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:02:39.005104   66021 cri.go:89] found id: "fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:39.005126   66021 cri.go:89] found id: ""
	I0314 01:02:39.005134   66021 logs.go:276] 1 containers: [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648]
	I0314 01:02:39.005178   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.009751   66021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:02:39.009803   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:02:39.048232   66021 cri.go:89] found id: ""
	I0314 01:02:39.048263   66021 logs.go:276] 0 containers: []
	W0314 01:02:39.048274   66021 logs.go:278] No container was found matching "kindnet"
	I0314 01:02:39.048281   66021 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:02:39.048335   66021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:02:39.087548   66021 cri.go:89] found id: "051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:39.087568   66021 cri.go:89] found id: "5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:39.087572   66021 cri.go:89] found id: ""
	I0314 01:02:39.087579   66021 logs.go:276] 2 containers: [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3]
	I0314 01:02:39.087624   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.092379   66021 ssh_runner.go:195] Run: which crictl
	I0314 01:02:39.097599   66021 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:02:39.097621   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:02:39.236455   66021 logs.go:123] Gathering logs for kube-apiserver [a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b] ...
	I0314 01:02:39.236484   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ee2cfc6f4e7de3423846d8665f44a52de7736c4cd016d39f79ecdb9167979b"
	I0314 01:02:39.284275   66021 logs.go:123] Gathering logs for etcd [2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8] ...
	I0314 01:02:39.284300   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad67f56260114e34ff6a457c2337101a344b72e44a2a1735d769704b18978f8"
	I0314 01:02:39.341908   66021 logs.go:123] Gathering logs for coredns [e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82] ...
	I0314 01:02:39.341939   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e87ba9e92390ac87103f58affea795e5d212080639a488551a714ad2eed7bf82"
	I0314 01:02:39.384407   66021 logs.go:123] Gathering logs for kube-scheduler [46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef] ...
	I0314 01:02:39.384435   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46a128a58b6656c7631fc424c56a8806c9f358556c9956d319fabad2ae8530ef"
	I0314 01:02:39.445137   66021 logs.go:123] Gathering logs for kube-controller-manager [fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648] ...
	I0314 01:02:39.445167   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe628f4a1ccd1196e9ced6b0c8b0f41a80865c5ab64281f586c14a821b0c8648"
	I0314 01:02:39.501656   66021 logs.go:123] Gathering logs for kubelet ...
	I0314 01:02:39.501686   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:02:39.567627   66021 logs.go:123] Gathering logs for dmesg ...
	I0314 01:02:39.567661   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:02:39.584561   66021 logs.go:123] Gathering logs for storage-provisioner [051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252] ...
	I0314 01:02:39.584601   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051f66d3597a85a8011da73d46ccbf5db01953eec3e3e85cd70fca9ead87a252"
	I0314 01:02:39.626131   66021 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:02:39.626196   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:02:40.002525   66021 logs.go:123] Gathering logs for container status ...
	I0314 01:02:40.002572   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:02:40.058721   66021 logs.go:123] Gathering logs for kube-proxy [08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0] ...
	I0314 01:02:40.058753   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cdc002a4003d7f92470ca7341b5b328195d23c769e2fce216ba0b2fc7950f0"
	I0314 01:02:40.097905   66021 logs.go:123] Gathering logs for storage-provisioner [5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3] ...
	I0314 01:02:40.097941   66021 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5306eb697d68f8e5bad04bd5e10adac155ee3dbdf3a5ad407219b979e3f986b3"
	I0314 01:02:39.562661   66232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.345580159s)
	I0314 01:02:39.562733   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:39.579845   66232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 01:02:39.592242   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:02:39.603936   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:02:39.603962   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 01:02:39.604023   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:02:39.614854   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:02:39.614909   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:02:39.626602   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:02:39.637282   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:02:39.637334   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:02:39.650019   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:02:39.662020   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:02:39.662084   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:02:39.674740   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:02:39.685131   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:02:39.685190   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
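The stale-config check above can be read as the loop below: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. This is a sketch of the behaviour visible in the log, assuming root access on the node, not the minikube implementation; in this run none of the files exist, so all four are removed before kubeadm init.

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}"; then
        sudo rm -f "/etc/kubernetes/${f}"   # drop configs that do not reference the expected endpoint
      fi
    done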
	I0314 01:02:39.696251   66232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:02:39.768972   66232 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 01:02:39.769055   66232 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:02:39.926950   66232 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:02:39.927086   66232 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:02:39.927239   66232 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 01:02:40.161671   66232 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:02:40.164039   66232 out.go:204]   - Generating certificates and keys ...
	I0314 01:02:40.164124   66232 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:02:40.164219   66232 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:02:40.164321   66232 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 01:02:40.164411   66232 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 01:02:40.164508   66232 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 01:02:40.164595   66232 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 01:02:40.164680   66232 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 01:02:40.164762   66232 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 01:02:40.164868   66232 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 01:02:40.164982   66232 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 01:02:40.165050   66232 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 01:02:40.165123   66232 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:02:40.264416   66232 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:02:40.417229   66232 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:02:40.489457   66232 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:02:40.743517   66232 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:02:40.759319   66232 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:02:40.760643   66232 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:02:40.760715   66232 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:02:40.939953   66232 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 01:02:42.643820   66021 system_pods.go:59] 8 kube-system pods found
	I0314 01:02:42.643846   66021 system_pods.go:61] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running
	I0314 01:02:42.643851   66021 system_pods.go:61] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running
	I0314 01:02:42.643854   66021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running
	I0314 01:02:42.643858   66021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running
	I0314 01:02:42.643861   66021 system_pods.go:61] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running
	I0314 01:02:42.643863   66021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running
	I0314 01:02:42.643869   66021 system_pods.go:61] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:42.643874   66021 system_pods.go:61] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 01:02:42.643881   66021 system_pods.go:74] duration metric: took 3.964195909s to wait for pod list to return data ...
	I0314 01:02:42.643888   66021 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:02:42.646461   66021 default_sa.go:45] found service account: "default"
	I0314 01:02:42.646481   66021 default_sa.go:55] duration metric: took 2.585464ms for default service account to be created ...
	I0314 01:02:42.646490   66021 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:02:42.651961   66021 system_pods.go:86] 8 kube-system pods found
	I0314 01:02:42.651983   66021 system_pods.go:89] "coredns-5dd5756b68-cc7x2" [48ab007b-5498-4883-84b9-f034c3095fc0] Running
	I0314 01:02:42.651989   66021 system_pods.go:89] "etcd-default-k8s-diff-port-652215" [eee2b7d2-26b2-4b6b-a7ea-b0f36c96ed95] Running
	I0314 01:02:42.651993   66021 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-652215" [47666893-4425-4be4-8a2a-67a40c4ec92b] Running
	I0314 01:02:42.651998   66021 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-652215" [182e6b77-ecd3-4f48-b132-db52370ace93] Running
	I0314 01:02:42.652002   66021 system_pods.go:89] "kube-proxy-s7dwp" [e793aa69-a2c7-4404-9b74-ed4ac39cb249] Running
	I0314 01:02:42.652006   66021 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-652215" [4a6516c6-7cfa-41aa-87be-24ff371bb65f] Running
	I0314 01:02:42.652012   66021 system_pods.go:89] "metrics-server-57f55c9bc5-kll8v" [9060285f-ee6f-4d17-a7a6-a5a24f88d80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:02:42.652019   66021 system_pods.go:89] "storage-provisioner" [b70cb5c2-863b-45d4-9363-dd364a240118] Running
	I0314 01:02:42.652027   66021 system_pods.go:126] duration metric: took 5.530611ms to wait for k8s-apps to be running ...
	I0314 01:02:42.652037   66021 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:02:42.652078   66021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:02:42.669896   66021 system_svc.go:56] duration metric: took 17.851623ms WaitForService to wait for kubelet
	I0314 01:02:42.669930   66021 kubeadm.go:576] duration metric: took 4m24.774372903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:02:42.669965   66021 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:02:42.672766   66021 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:02:42.672789   66021 node_conditions.go:123] node cpu capacity is 2
	I0314 01:02:42.672802   66021 node_conditions.go:105] duration metric: took 2.830665ms to run NodePressure ...
	I0314 01:02:42.672813   66021 start.go:240] waiting for startup goroutines ...
	I0314 01:02:42.672819   66021 start.go:245] waiting for cluster config update ...
	I0314 01:02:42.672829   66021 start.go:254] writing updated cluster config ...
	I0314 01:02:42.673076   66021 ssh_runner.go:195] Run: rm -f paused
	I0314 01:02:42.721481   66021 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 01:02:42.723479   66021 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-652215" cluster and "default" namespace by default
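Once the profile reports "Done!", the readiness checks minikube performed above can be repeated by hand; a sketch using the endpoint and context name from this run (-k because the apiserver serves a cluster-internal certificate):

    curl -k https://192.168.61.7:8444/healthz ; echo                          # expect: ok
    kubectl --context default-k8s-diff-port-652215 -n kube-system get pods    # the same pod list waited on above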
	I0314 01:02:40.942001   66232 out.go:204]   - Booting up control plane ...
	I0314 01:02:40.942144   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:02:40.951012   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:02:40.952452   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:02:40.953336   66232 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:02:40.960365   66232 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:02:41.692569   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:43.693995   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:46.193241   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:48.194371   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:50.692479   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:52.692654   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:55.192035   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:02:57.692909   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:00.193154   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:02.194296   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:04.196022   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:06.693006   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:09.192302   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:11.192955   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:13.692552   65557 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace has status "Ready":"False"
	I0314 01:03:15.192489   65557 pod_ready.go:81] duration metric: took 4m0.007020608s for pod "metrics-server-57f55c9bc5-bbz2d" in "kube-system" namespace to be "Ready" ...
	E0314 01:03:15.192527   65557 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:03:15.192538   65557 pod_ready.go:38] duration metric: took 4m4.053934642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
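When the metrics-server pod never reaches Ready, as in the 4m0s timeout above, the usual next step is to inspect the pod directly; a sketch assuming kubectl is pointed at the same cluster profile (the kubectl context for this run is not shown in the excerpt):

    kubectl -n kube-system get pod metrics-server-57f55c9bc5-bbz2d -o wide
    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-bbz2d     # check Events for image pull or probe failures
    kubectl -n kube-system logs metrics-server-57f55c9bc5-bbz2d --tail=100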
	I0314 01:03:15.192554   65557 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:03:15.192587   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:15.192647   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:15.256619   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:15.256643   65557 cri.go:89] found id: ""
	I0314 01:03:15.256653   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:15.256707   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.262251   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:15.262317   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:15.305577   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:15.305605   65557 cri.go:89] found id: ""
	I0314 01:03:15.305613   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:15.305676   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.311058   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:15.311136   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:15.350580   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:15.350605   65557 cri.go:89] found id: ""
	I0314 01:03:15.350615   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:15.350675   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.355574   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:15.355637   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:15.395248   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:15.395278   65557 cri.go:89] found id: ""
	I0314 01:03:15.395289   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:15.395345   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.400714   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:15.400789   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:15.446181   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:15.446207   65557 cri.go:89] found id: ""
	I0314 01:03:15.446217   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:15.446280   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.451142   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:15.451220   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:15.499079   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:15.499106   65557 cri.go:89] found id: ""
	I0314 01:03:15.499120   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:15.499178   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.504092   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:15.504158   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:15.546791   65557 cri.go:89] found id: ""
	I0314 01:03:15.546820   65557 logs.go:276] 0 containers: []
	W0314 01:03:15.546830   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:15.546838   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:15.546898   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:15.586249   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:15.586271   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:15.586275   65557 cri.go:89] found id: ""
	I0314 01:03:15.586282   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:15.586341   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.590680   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:15.595060   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:15.595086   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:16.112562   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:16.112623   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:16.172847   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:16.172882   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:16.333057   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:16.333098   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:16.386456   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:16.386490   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:16.444375   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:16.444402   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:16.486220   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:16.486260   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:16.526438   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:16.526470   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:16.576927   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:16.576958   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:16.592148   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:16.592174   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:16.648514   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:16.648545   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:16.695025   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:16.695051   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:16.746925   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:16.746955   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.285952   65557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:03:19.304257   65557 api_server.go:72] duration metric: took 4m15.904145845s to wait for apiserver process to appear ...
	I0314 01:03:19.304286   65557 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:03:19.304325   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:19.304387   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:20.960311   66232 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 01:03:20.961416   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:20.961634   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
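The kubelet-check failure above is kubeadm retrying the kubelet's local health endpoint; it can be reproduced by hand on the node. A sketch, assuming shell access to the node:

    curl -sS http://localhost:10248/healthz ; echo   # the probe kubeadm is retrying above
    sudo systemctl is-active kubelet                 # whether the unit is running at all
    sudo journalctl -u kubelet -n 400 --no-pager     # recent kubelet output with the actual failure reason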
	I0314 01:03:19.352722   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:19.352749   65557 cri.go:89] found id: ""
	I0314 01:03:19.352758   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:19.352813   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.358745   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:19.358840   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:19.398652   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:19.398677   65557 cri.go:89] found id: ""
	I0314 01:03:19.398687   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:19.398745   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.403737   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:19.403812   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:19.449705   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:19.449789   65557 cri.go:89] found id: ""
	I0314 01:03:19.449804   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:19.449875   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.454646   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:19.454703   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:19.497413   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:19.497437   65557 cri.go:89] found id: ""
	I0314 01:03:19.497446   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:19.497505   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.502314   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:19.502383   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:19.544651   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:19.544670   65557 cri.go:89] found id: ""
	I0314 01:03:19.544677   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:19.544734   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.549565   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:19.549627   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:19.588946   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:19.588964   65557 cri.go:89] found id: ""
	I0314 01:03:19.588971   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:19.589021   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.593896   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:19.593962   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:19.635716   65557 cri.go:89] found id: ""
	I0314 01:03:19.635742   65557 logs.go:276] 0 containers: []
	W0314 01:03:19.635753   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:19.635759   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:19.635815   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:19.677464   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.677489   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:19.677495   65557 cri.go:89] found id: ""
	I0314 01:03:19.677505   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:19.677565   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.682353   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:19.687167   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:19.687188   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:19.736953   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:19.736991   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:19.781476   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:19.781506   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:19.822236   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:19.822265   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:19.866289   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:19.866312   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:19.911787   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:19.911815   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:19.950065   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:19.950101   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:19.989521   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:19.989554   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:20.384831   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:20.384868   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:20.441338   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:20.441369   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:20.457686   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:20.457713   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:20.576908   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:20.576939   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:20.620339   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:20.620368   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:23.171840   65557 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0314 01:03:23.178026   65557 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0314 01:03:23.179553   65557 api_server.go:141] control plane version: v1.28.4
	I0314 01:03:23.179581   65557 api_server.go:131] duration metric: took 3.875286718s to wait for apiserver health ...
	I0314 01:03:23.179592   65557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:03:23.179620   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:03:23.179680   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:03:23.228503   65557 cri.go:89] found id: "bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:23.228523   65557 cri.go:89] found id: ""
	I0314 01:03:23.228530   65557 logs.go:276] 1 containers: [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a]
	I0314 01:03:23.228582   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.233166   65557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:03:23.233236   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:03:23.274079   65557 cri.go:89] found id: "24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:23.274110   65557 cri.go:89] found id: ""
	I0314 01:03:23.274120   65557 logs.go:276] 1 containers: [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d]
	I0314 01:03:23.274179   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.279453   65557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:03:23.279559   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:03:23.319821   65557 cri.go:89] found id: "a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:23.319844   65557 cri.go:89] found id: ""
	I0314 01:03:23.319854   65557 logs.go:276] 1 containers: [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127]
	I0314 01:03:23.319914   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.325134   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:03:23.325199   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:03:23.366475   65557 cri.go:89] found id: "066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:23.366496   65557 cri.go:89] found id: ""
	I0314 01:03:23.366503   65557 logs.go:276] 1 containers: [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684]
	I0314 01:03:23.366547   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.371660   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:03:23.371716   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:03:23.416034   65557 cri.go:89] found id: "1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:23.416060   65557 cri.go:89] found id: ""
	I0314 01:03:23.416069   65557 logs.go:276] 1 containers: [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd]
	I0314 01:03:23.416128   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.421256   65557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:03:23.421319   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:03:23.461772   65557 cri.go:89] found id: "dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:23.461792   65557 cri.go:89] found id: ""
	I0314 01:03:23.461799   65557 logs.go:276] 1 containers: [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642]
	I0314 01:03:23.461848   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.466581   65557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:03:23.466644   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:03:23.513583   65557 cri.go:89] found id: ""
	I0314 01:03:23.513610   65557 logs.go:276] 0 containers: []
	W0314 01:03:23.513626   65557 logs.go:278] No container was found matching "kindnet"
	I0314 01:03:23.513633   65557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:03:23.513693   65557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:03:23.554856   65557 cri.go:89] found id: "d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:23.554875   65557 cri.go:89] found id: "2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:23.554879   65557 cri.go:89] found id: ""
	I0314 01:03:23.554885   65557 logs.go:276] 2 containers: [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca]
	I0314 01:03:23.554932   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.559820   65557 ssh_runner.go:195] Run: which crictl
	I0314 01:03:23.564514   65557 logs.go:123] Gathering logs for kubelet ...
	I0314 01:03:23.564534   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:03:23.619210   65557 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:03:23.619246   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:03:23.750881   65557 logs.go:123] Gathering logs for kube-apiserver [bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a] ...
	I0314 01:03:23.750908   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb8fc976a142133dfb1ec8cc60b266c80637e5216bc0ea9a201798c3b6056a"
	I0314 01:03:23.800300   65557 logs.go:123] Gathering logs for etcd [24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d] ...
	I0314 01:03:23.800342   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24395f2c73e375fd52c6014d2ba8344d8d3e8247d556deca265838e34c44363d"
	I0314 01:03:23.849606   65557 logs.go:123] Gathering logs for coredns [a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127] ...
	I0314 01:03:23.849637   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c7aed18e08732b3136e83f8e2af033973ddcbe423628a1a82b8fd1f80c127"
	I0314 01:03:23.896168   65557 logs.go:123] Gathering logs for storage-provisioner [d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204] ...
	I0314 01:03:23.896194   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d987b830b81fbf67e9d58eefe1ddaf8e2656ce79830e08c2196783d3bf77c204"
	I0314 01:03:23.938976   65557 logs.go:123] Gathering logs for dmesg ...
	I0314 01:03:23.939008   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:03:23.955960   65557 logs.go:123] Gathering logs for kube-scheduler [066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684] ...
	I0314 01:03:23.955988   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 066a9f5381b0135659c95243ced5bc95df50fe045aa65daf0a362a8d8b0cd684"
	I0314 01:03:23.999961   65557 logs.go:123] Gathering logs for kube-proxy [1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd] ...
	I0314 01:03:23.999990   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a163fee30923d40aa11589b93c20c74f54c1b45bd6f7f84b9bcb8e17558eebd"
	I0314 01:03:24.044533   65557 logs.go:123] Gathering logs for kube-controller-manager [dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642] ...
	I0314 01:03:24.044562   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb700c9f2e3ba3c6041bd05057233cadce509eced7748075ee878cd64933642"
	I0314 01:03:24.097691   65557 logs.go:123] Gathering logs for storage-provisioner [2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca] ...
	I0314 01:03:24.097720   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e736f3d1ff7d79049fc2246c7f42ae342fd2fb2d5a8d194d9d8d175328337ca"
	I0314 01:03:24.137172   65557 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:03:24.137207   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:03:24.480724   65557 logs.go:123] Gathering logs for container status ...
	I0314 01:03:24.480767   65557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:03:27.042143   65557 system_pods.go:59] 8 kube-system pods found
	I0314 01:03:27.042177   65557 system_pods.go:61] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running
	I0314 01:03:27.042185   65557 system_pods.go:61] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running
	I0314 01:03:27.042191   65557 system_pods.go:61] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running
	I0314 01:03:27.042197   65557 system_pods.go:61] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running
	I0314 01:03:27.042201   65557 system_pods.go:61] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running
	I0314 01:03:27.042206   65557 system_pods.go:61] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running
	I0314 01:03:27.042213   65557 system_pods.go:61] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:03:27.042220   65557 system_pods.go:61] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running
	I0314 01:03:27.042231   65557 system_pods.go:74] duration metric: took 3.862631414s to wait for pod list to return data ...
	I0314 01:03:27.042241   65557 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:03:27.045464   65557 default_sa.go:45] found service account: "default"
	I0314 01:03:27.045542   65557 default_sa.go:55] duration metric: took 3.286713ms for default service account to be created ...
	I0314 01:03:27.045573   65557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:03:27.057164   65557 system_pods.go:86] 8 kube-system pods found
	I0314 01:03:27.057193   65557 system_pods.go:89] "coredns-5dd5756b68-r2dml" [d18370dd-193e-45c2-ab72-36f8155ac015] Running
	I0314 01:03:27.057199   65557 system_pods.go:89] "etcd-embed-certs-164135" [3d793df3-83bb-4cec-8efe-710d35e61a66] Running
	I0314 01:03:27.057204   65557 system_pods.go:89] "kube-apiserver-embed-certs-164135" [507551f1-6a46-4236-9028-f2f27fe276ef] Running
	I0314 01:03:27.057209   65557 system_pods.go:89] "kube-controller-manager-embed-certs-164135" [e48e0369-a37d-4d93-98bc-24913a5ce470] Running
	I0314 01:03:27.057213   65557 system_pods.go:89] "kube-proxy-wjz6d" [80b76a6d-0a4a-4e06-8e0a-7ac69d91a4ab] Running
	I0314 01:03:27.057217   65557 system_pods.go:89] "kube-scheduler-embed-certs-164135" [ff74851b-c1fa-460e-b926-64ffd65a0bc1] Running
	I0314 01:03:27.057224   65557 system_pods.go:89] "metrics-server-57f55c9bc5-bbz2d" [e6df7295-58bb-4ece-841f-f93afd3f9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:03:27.057236   65557 system_pods.go:89] "storage-provisioner" [ad3f5f56-5c62-4dc1-a4d3-4c04efb0500a] Running
	I0314 01:03:27.057243   65557 system_pods.go:126] duration metric: took 11.663667ms to wait for k8s-apps to be running ...
	I0314 01:03:27.057249   65557 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:03:27.057295   65557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:03:27.075469   65557 system_svc.go:56] duration metric: took 18.20927ms WaitForService to wait for kubelet
	I0314 01:03:27.075501   65557 kubeadm.go:576] duration metric: took 4m23.675393774s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:03:27.075521   65557 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:03:27.079149   65557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 01:03:27.079177   65557 node_conditions.go:123] node cpu capacity is 2
	I0314 01:03:27.079191   65557 node_conditions.go:105] duration metric: took 3.664222ms to run NodePressure ...
	I0314 01:03:27.079204   65557 start.go:240] waiting for startup goroutines ...
	I0314 01:03:27.079214   65557 start.go:245] waiting for cluster config update ...
	I0314 01:03:27.079228   65557 start.go:254] writing updated cluster config ...
	I0314 01:03:27.079567   65557 ssh_runner.go:195] Run: rm -f paused
	I0314 01:03:27.128453   65557 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 01:03:27.131043   65557 out.go:177] * Done! kubectl is now configured to use "embed-certs-164135" cluster and "default" namespace by default
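At this point the embed-certs-164135 run completes successfully. A quick way to confirm the context minikube reports here (hypothetical follow-up commands, not part of the recorded test run):
	kubectl config current-context                          # expected to print "embed-certs-164135"
	kubectl --context embed-certs-164135 get pods -n kube-system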
	I0314 01:03:25.961895   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:25.962127   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:35.962149   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:35.962352   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:03:55.963116   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:03:55.963372   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:04:35.964528   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:04:35.964814   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:04:35.964841   66232 kubeadm.go:309] 
	I0314 01:04:35.964900   66232 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 01:04:35.964961   66232 kubeadm.go:309] 		timed out waiting for the condition
	I0314 01:04:35.964972   66232 kubeadm.go:309] 
	I0314 01:04:35.965026   66232 kubeadm.go:309] 	This error is likely caused by:
	I0314 01:04:35.965074   66232 kubeadm.go:309] 		- The kubelet is not running
	I0314 01:04:35.965219   66232 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 01:04:35.965231   66232 kubeadm.go:309] 
	I0314 01:04:35.965372   66232 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 01:04:35.965421   66232 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 01:04:35.965476   66232 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 01:04:35.965489   66232 kubeadm.go:309] 
	I0314 01:04:35.965638   66232 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 01:04:35.965743   66232 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 01:04:35.965753   66232 kubeadm.go:309] 
	I0314 01:04:35.965872   66232 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 01:04:35.965991   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 01:04:35.966110   66232 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 01:04:35.966220   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 01:04:35.966237   66232 kubeadm.go:309] 
	I0314 01:04:35.966903   66232 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:04:35.967031   66232 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 01:04:35.967165   66232 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0314 01:04:35.967278   66232 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
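The kubeadm output above lists its standard troubleshooting steps; run manually on the node they would look roughly like the following (hypothetical invocations, with the CRI-O socket path taken from the log above and CONTAINERID standing in for a real container ID):
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID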
	
	I0314 01:04:35.967374   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 01:04:36.533381   66232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:04:36.550315   66232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 01:04:36.562559   66232 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 01:04:36.562582   66232 kubeadm.go:156] found existing configuration files:
	
	I0314 01:04:36.562646   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 01:04:36.573080   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 01:04:36.573148   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 01:04:36.583367   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 01:04:36.592837   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 01:04:36.592905   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 01:04:36.602671   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 01:04:36.611880   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 01:04:36.611923   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 01:04:36.621373   66232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 01:04:36.630200   66232 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 01:04:36.630250   66232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 01:04:36.639622   66232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 01:04:36.876475   66232 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 01:06:32.905531   66232 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 01:06:32.905658   66232 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 01:06:32.907378   66232 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 01:06:32.907462   66232 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 01:06:32.907597   66232 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 01:06:32.907758   66232 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 01:06:32.907878   66232 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 01:06:32.907969   66232 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 01:06:32.909826   66232 out.go:204]   - Generating certificates and keys ...
	I0314 01:06:32.909915   66232 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 01:06:32.909976   66232 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 01:06:32.910065   66232 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 01:06:32.910143   66232 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 01:06:32.910232   66232 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 01:06:32.910306   66232 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 01:06:32.910371   66232 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 01:06:32.910450   66232 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 01:06:32.910516   66232 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 01:06:32.910579   66232 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 01:06:32.910616   66232 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 01:06:32.910705   66232 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 01:06:32.910809   66232 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 01:06:32.910860   66232 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 01:06:32.910946   66232 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 01:06:32.911032   66232 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 01:06:32.911131   66232 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 01:06:32.911225   66232 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 01:06:32.911290   66232 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 01:06:32.911360   66232 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 01:06:32.912972   66232 out.go:204]   - Booting up control plane ...
	I0314 01:06:32.913087   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 01:06:32.913169   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 01:06:32.913260   66232 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 01:06:32.913336   66232 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 01:06:32.913475   66232 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 01:06:32.913555   66232 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 01:06:32.913645   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.913879   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.913979   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914216   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914294   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914461   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914521   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.914704   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.914827   66232 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 01:06:32.915063   66232 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 01:06:32.915076   66232 kubeadm.go:309] 
	I0314 01:06:32.915112   66232 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 01:06:32.915167   66232 kubeadm.go:309] 		timed out waiting for the condition
	I0314 01:06:32.915177   66232 kubeadm.go:309] 
	I0314 01:06:32.915230   66232 kubeadm.go:309] 	This error is likely caused by:
	I0314 01:06:32.915269   66232 kubeadm.go:309] 		- The kubelet is not running
	I0314 01:06:32.915353   66232 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 01:06:32.915360   66232 kubeadm.go:309] 
	I0314 01:06:32.915441   66232 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 01:06:32.915469   66232 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 01:06:32.915498   66232 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 01:06:32.915505   66232 kubeadm.go:309] 
	I0314 01:06:32.915613   66232 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 01:06:32.915700   66232 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 01:06:32.915712   66232 kubeadm.go:309] 
	I0314 01:06:32.915855   66232 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 01:06:32.915955   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 01:06:32.916023   66232 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 01:06:32.916088   66232 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 01:06:32.916154   66232 kubeadm.go:393] duration metric: took 7m58.036160375s to StartCluster
	I0314 01:06:32.916166   66232 kubeadm.go:309] 
	I0314 01:06:32.916226   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:06:32.916295   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:06:32.972336   66232 cri.go:89] found id: ""
	I0314 01:06:32.972364   66232 logs.go:276] 0 containers: []
	W0314 01:06:32.972371   66232 logs.go:278] No container was found matching "kube-apiserver"
	I0314 01:06:32.972380   66232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 01:06:32.972434   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:06:33.023008   66232 cri.go:89] found id: ""
	I0314 01:06:33.023039   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.023050   66232 logs.go:278] No container was found matching "etcd"
	I0314 01:06:33.023057   66232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 01:06:33.023130   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:06:33.061974   66232 cri.go:89] found id: ""
	I0314 01:06:33.062002   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.062011   66232 logs.go:278] No container was found matching "coredns"
	I0314 01:06:33.062017   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:06:33.062085   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:06:33.101221   66232 cri.go:89] found id: ""
	I0314 01:06:33.101252   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.101264   66232 logs.go:278] No container was found matching "kube-scheduler"
	I0314 01:06:33.101271   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:06:33.101330   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:06:33.139665   66232 cri.go:89] found id: ""
	I0314 01:06:33.139689   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.139697   66232 logs.go:278] No container was found matching "kube-proxy"
	I0314 01:06:33.139707   66232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:06:33.139753   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:06:33.186493   66232 cri.go:89] found id: ""
	I0314 01:06:33.186519   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.186530   66232 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 01:06:33.186538   66232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 01:06:33.186610   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:06:33.236042   66232 cri.go:89] found id: ""
	I0314 01:06:33.236071   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.236083   66232 logs.go:278] No container was found matching "kindnet"
	I0314 01:06:33.236091   66232 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:06:33.236148   66232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:06:33.279285   66232 cri.go:89] found id: ""
	I0314 01:06:33.279316   66232 logs.go:276] 0 containers: []
	W0314 01:06:33.279326   66232 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 01:06:33.279338   66232 logs.go:123] Gathering logs for kubelet ...
	I0314 01:06:33.279361   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:06:33.331702   66232 logs.go:123] Gathering logs for dmesg ...
	I0314 01:06:33.331734   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:06:33.347222   66232 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:06:33.347249   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 01:06:33.437201   66232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 01:06:33.437225   66232 logs.go:123] Gathering logs for CRI-O ...
	I0314 01:06:33.437240   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 01:06:33.550099   66232 logs.go:123] Gathering logs for container status ...
	I0314 01:06:33.550135   66232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0314 01:06:33.596794   66232 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 01:06:33.596833   66232 out.go:239] * 
	W0314 01:06:33.596906   66232 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 01:06:33.596927   66232 out.go:239] * 
	W0314 01:06:33.597713   66232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 01:06:33.601567   66232 out.go:177] 
	W0314 01:06:33.602661   66232 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 01:06:33.602704   66232 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 01:06:33.602722   66232 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
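For reference, the two follow-ups minikube suggests above would be invoked roughly as follows (hypothetical commands; the profile name is taken from the CRI-O section below for this run):
	minikube start -p old-k8s-version-004791 --extra-config=kubelet.cgroup-driver=systemd
	minikube logs -p old-k8s-version-004791 --file=logs.txt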
	I0314 01:06:33.604223   66232 out.go:177] 
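The CRI-O excerpt that follows corresponds to the "journalctl -u crio" gather step logged earlier; a minimal way to reproduce that capture on the node (hypothetical, assuming a systemd-managed crio unit as in this run):
	sudo journalctl -u crio -n 400 --no-pager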
	
	
	==> CRI-O <==
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.332815090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379034332788911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e053102f-ea00-4887-b39c-b41098cf25da name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.333311889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67f03e1f-11cf-49a6-8908-87a6bdd88f68 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.333398286Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67f03e1f-11cf-49a6-8908-87a6bdd88f68 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.333434167Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=67f03e1f-11cf-49a6-8908-87a6bdd88f68 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.367005397Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d8b1d0c-6cf3-431a-a745-417c0354d5dc name=/runtime.v1.RuntimeService/Version
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.367138189Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d8b1d0c-6cf3-431a-a745-417c0354d5dc name=/runtime.v1.RuntimeService/Version
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.368740688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21d6729a-7036-4011-93a3-82f489488e9e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.369250028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379034369228507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21d6729a-7036-4011-93a3-82f489488e9e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.370186932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49bf7520-a5c7-4018-9b0d-fb0560ac4756 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.370265713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49bf7520-a5c7-4018-9b0d-fb0560ac4756 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.370302501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=49bf7520-a5c7-4018-9b0d-fb0560ac4756 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.404123576Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97bdc3e3-d98f-4021-a73b-6137c240868a name=/runtime.v1.RuntimeService/Version
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.404261217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97bdc3e3-d98f-4021-a73b-6137c240868a name=/runtime.v1.RuntimeService/Version
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.405588823Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=244a61c3-f1a6-4218-81cb-7d1975a12355 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.406003128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379034405981011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=244a61c3-f1a6-4218-81cb-7d1975a12355 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.406618383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d893b0c9-82c6-4440-85a6-5a659022cfa0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.406668848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d893b0c9-82c6-4440-85a6-5a659022cfa0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.406703058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d893b0c9-82c6-4440-85a6-5a659022cfa0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.442500803Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=070d867d-621f-4d51-bf82-56ca95e41004 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.442643422Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=070d867d-621f-4d51-bf82-56ca95e41004 name=/runtime.v1.RuntimeService/Version
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.444104806Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f404a516-39eb-41b7-b7ea-3aa5ccd50669 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.444475838Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710379034444453768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f404a516-39eb-41b7-b7ea-3aa5ccd50669 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.444953658Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19a275be-3c2e-4323-a404-100d7349b94d name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.445009552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19a275be-3c2e-4323-a404-100d7349b94d name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 01:17:14 old-k8s-version-004791 crio[647]: time="2024-03-14 01:17:14.445127437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=19a275be-3c2e-4323-a404-100d7349b94d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar14 00:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052991] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047909] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.890210] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.079753] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.730198] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.316199] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.062984] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075521] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.214616] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.150711] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.294146] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.930331] systemd-fstab-generator[830]: Ignoring "noauto" option for root device
	[  +0.061685] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.999458] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +8.247240] kauditd_printk_skb: 46 callbacks suppressed
	[Mar14 01:02] systemd-fstab-generator[4935]: Ignoring "noauto" option for root device
	[Mar14 01:04] systemd-fstab-generator[5216]: Ignoring "noauto" option for root device
	[  +0.077634] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:17:14 up 19 min,  0 users,  load average: 0.02, 0.03, 0.05
	Linux old-k8s-version-004791 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]: net/http.(*Transport).dial(0xc000918140, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00095c5a0, 0x24, 0x0, 0x2f30000051c, 0x28e, ...)
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]: net/http.(*Transport).dialConn(0xc000918140, 0x4f7fe00, 0xc000052030, 0x0, 0xc000a26300, 0x5, 0xc00095c5a0, 0x24, 0x0, 0xc0007ca480, ...)
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]: net/http.(*Transport).dialConnFor(0xc000918140, 0xc00075b970)
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]: created by net/http.(*Transport).queueForDial
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]: goroutine 159 [select]:
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000a1c500, 0x1, 0x0, 0x0, 0x0, 0x0)
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc00014df80, 0x0, 0x0)
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0001a2c40)
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6635]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Mar 14 01:17:12 old-k8s-version-004791 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 131.
	Mar 14 01:17:12 old-k8s-version-004791 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 14 01:17:12 old-k8s-version-004791 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6653]: I0314 01:17:12.956158    6653 server.go:416] Version: v1.20.0
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6653]: I0314 01:17:12.956595    6653 server.go:837] Client rotation is on, will bootstrap in background
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6653]: I0314 01:17:12.959218    6653 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6653]: I0314 01:17:12.960323    6653 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Mar 14 01:17:12 old-k8s-version-004791 kubelet[6653]: W0314 01:17:12.960373    6653 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-004791 -n old-k8s-version-004791
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-004791 -n old-k8s-version-004791: exit status 2 (249.586771ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-004791" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (95.34s)

                                                
                                    

Test pass (249/319)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 21.49
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 14.64
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.14
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 19.7
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 1.39
31 TestOffline 101.37
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 153.73
38 TestAddons/parallel/Registry 18.21
40 TestAddons/parallel/InspektorGadget 12.28
41 TestAddons/parallel/MetricsServer 6.98
42 TestAddons/parallel/HelmTiller 12.63
44 TestAddons/parallel/CSI 108.99
45 TestAddons/parallel/Headlamp 25.54
46 TestAddons/parallel/CloudSpanner 5.62
47 TestAddons/parallel/LocalPath 57.25
48 TestAddons/parallel/NvidiaDevicePlugin 5.56
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.13
54 TestCertOptions 60.96
55 TestCertExpiration 285.06
57 TestForceSystemdFlag 63.02
58 TestForceSystemdEnv 71.07
60 TestKVMDriverInstallOrUpdate 4.56
64 TestErrorSpam/setup 47.04
65 TestErrorSpam/start 0.39
66 TestErrorSpam/status 0.78
67 TestErrorSpam/pause 1.66
68 TestErrorSpam/unpause 1.72
69 TestErrorSpam/stop 5.74
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 96.77
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 35.45
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.71
81 TestFunctional/serial/CacheCmd/cache/add_local 2.17
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 285.16
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 1.23
92 TestFunctional/serial/LogsFileCmd 1.26
93 TestFunctional/serial/InvalidService 4.18
95 TestFunctional/parallel/ConfigCmd 0.41
96 TestFunctional/parallel/DashboardCmd 13.39
97 TestFunctional/parallel/DryRun 0.28
98 TestFunctional/parallel/InternationalLanguage 0.15
99 TestFunctional/parallel/StatusCmd 0.82
103 TestFunctional/parallel/ServiceCmdConnect 8.9
104 TestFunctional/parallel/AddonsCmd 0.14
105 TestFunctional/parallel/PersistentVolumeClaim 52.88
107 TestFunctional/parallel/SSHCmd 0.43
108 TestFunctional/parallel/CpCmd 1.45
109 TestFunctional/parallel/MySQL 25.89
110 TestFunctional/parallel/FileSync 0.27
111 TestFunctional/parallel/CertSync 1.38
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
119 TestFunctional/parallel/License 0.49
120 TestFunctional/parallel/Version/short 0.25
121 TestFunctional/parallel/Version/components 0.75
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
127 TestFunctional/parallel/ImageCommands/ImageListJson 1.79
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.37
130 TestFunctional/parallel/ImageCommands/Setup 1.97
131 TestFunctional/parallel/ServiceCmd/DeployApp 25.33
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 13.07
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.33
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.31
144 TestFunctional/parallel/ServiceCmd/List 0.53
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
147 TestFunctional/parallel/ServiceCmd/Format 0.42
148 TestFunctional/parallel/ServiceCmd/URL 0.35
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.42
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
151 TestFunctional/parallel/ProfileCmd/profile_list 0.37
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
154 TestFunctional/parallel/MountCmd/any-port 8.72
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.77
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.09
157 TestFunctional/parallel/MountCmd/specific-port 1.76
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.66
159 TestFunctional/delete_addon-resizer_images 0.06
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMutliControlPlane/serial/StartCluster 317.34
166 TestMutliControlPlane/serial/DeployApp 12.79
167 TestMutliControlPlane/serial/PingHostFromPods 1.38
168 TestMutliControlPlane/serial/AddWorkerNode 48.48
169 TestMutliControlPlane/serial/NodeLabels 0.07
170 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.57
171 TestMutliControlPlane/serial/CopyFile 13.79
173 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
175 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
178 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
180 TestMutliControlPlane/serial/RestartCluster 334.72
181 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.42
182 TestMutliControlPlane/serial/AddSecondaryNode 73.3
183 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
187 TestJSONOutput/start/Command 98.99
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.75
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.66
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.45
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.22
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 90.9
219 TestMountStart/serial/StartWithMountFirst 28.69
220 TestMountStart/serial/VerifyMountFirst 0.39
221 TestMountStart/serial/StartWithMountSecond 28.59
222 TestMountStart/serial/VerifyMountSecond 0.4
223 TestMountStart/serial/DeleteFirst 0.73
224 TestMountStart/serial/VerifyMountPostDelete 0.4
225 TestMountStart/serial/Stop 1.34
226 TestMountStart/serial/RestartStopped 24.7
227 TestMountStart/serial/VerifyMountPostStop 0.4
230 TestMultiNode/serial/FreshStart2Nodes 107.2
231 TestMultiNode/serial/DeployApp2Nodes 5.4
232 TestMultiNode/serial/PingHostFrom2Pods 1
233 TestMultiNode/serial/AddNode 42.93
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.24
236 TestMultiNode/serial/CopyFile 7.79
237 TestMultiNode/serial/StopNode 2.46
238 TestMultiNode/serial/StartAfterStop 33.28
240 TestMultiNode/serial/DeleteNode 2.45
242 TestMultiNode/serial/RestartMultiNode 170.19
243 TestMultiNode/serial/ValidateNameConflict 47.44
250 TestScheduledStopUnix 116.37
254 TestRunningBinaryUpgrade 194.25
258 TestStoppedBinaryUpgrade/Setup 2.31
262 TestStoppedBinaryUpgrade/Upgrade 198.36
267 TestNetworkPlugins/group/false 3.47
272 TestPause/serial/Start 127.19
280 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
283 TestNoKubernetes/serial/StartWithK8s 72.26
285 TestNoKubernetes/serial/StartWithStopK8s 6.53
286 TestNoKubernetes/serial/Start 29.03
287 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
288 TestNoKubernetes/serial/ProfileList 1.08
289 TestNoKubernetes/serial/Stop 1.72
290 TestNoKubernetes/serial/StartNoArgs 39.68
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
292 TestNetworkPlugins/group/auto/Start 75.31
293 TestNetworkPlugins/group/kindnet/Start 95.24
294 TestNetworkPlugins/group/auto/KubeletFlags 0.24
295 TestNetworkPlugins/group/auto/NetCatPod 11.28
296 TestNetworkPlugins/group/calico/Start 92.05
297 TestNetworkPlugins/group/auto/DNS 16.12
298 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
300 TestNetworkPlugins/group/kindnet/NetCatPod 11.27
301 TestNetworkPlugins/group/auto/Localhost 0.16
302 TestNetworkPlugins/group/auto/HairPin 0.13
303 TestNetworkPlugins/group/kindnet/DNS 0.21
304 TestNetworkPlugins/group/kindnet/Localhost 0.17
305 TestNetworkPlugins/group/kindnet/HairPin 0.16
306 TestNetworkPlugins/group/custom-flannel/Start 81.99
307 TestNetworkPlugins/group/enable-default-cni/Start 135.09
308 TestNetworkPlugins/group/calico/ControllerPod 6.01
309 TestNetworkPlugins/group/calico/KubeletFlags 0.32
310 TestNetworkPlugins/group/calico/NetCatPod 11.41
311 TestNetworkPlugins/group/calico/DNS 0.19
312 TestNetworkPlugins/group/calico/Localhost 0.17
313 TestNetworkPlugins/group/calico/HairPin 0.16
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.36
316 TestNetworkPlugins/group/flannel/Start 84.87
317 TestNetworkPlugins/group/bridge/Start 127.97
318 TestNetworkPlugins/group/custom-flannel/DNS 0.17
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.33
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
330 TestStartStop/group/no-preload/serial/FirstStart 146.39
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
332 TestNetworkPlugins/group/flannel/NetCatPod 11.32
333 TestNetworkPlugins/group/flannel/DNS 0.17
334 TestNetworkPlugins/group/flannel/Localhost 0.15
335 TestNetworkPlugins/group/flannel/HairPin 0.15
337 TestStartStop/group/embed-certs/serial/FirstStart 69
338 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
339 TestNetworkPlugins/group/bridge/NetCatPod 11.34
340 TestNetworkPlugins/group/bridge/DNS 0.15
341 TestNetworkPlugins/group/bridge/Localhost 0.14
342 TestNetworkPlugins/group/bridge/HairPin 0.15
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 99.11
345 TestStartStop/group/embed-certs/serial/DeployApp 12.32
346 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.21
348 TestStartStop/group/no-preload/serial/DeployApp 9.3
349 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.99
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
357 TestStartStop/group/embed-certs/serial/SecondStart 643.15
359 TestStartStop/group/no-preload/serial/SecondStart 545.87
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 532.94
362 TestStartStop/group/old-k8s-version/serial/Stop 6.32
363 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
374 TestStartStop/group/newest-cni/serial/FirstStart 56.32
375 TestStartStop/group/newest-cni/serial/DeployApp 0
376 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
377 TestStartStop/group/newest-cni/serial/Stop 10.39
378 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
379 TestStartStop/group/newest-cni/serial/SecondStart 37.37
380 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
383 TestStartStop/group/newest-cni/serial/Pause 2.47
x
+
TestDownloadOnly/v1.20.0/json-events (21.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-628793 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-628793 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (21.491851594s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (21.49s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-628793
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-628793: exit status 85 (72.718331ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-628793 | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC |          |
	|         | -p download-only-628793        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/13 23:26:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0313 23:26:12.031294   12280 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:26:12.031546   12280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:26:12.031556   12280 out.go:304] Setting ErrFile to fd 2...
	I0313 23:26:12.031560   12280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:26:12.031753   12280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	W0313 23:26:12.031873   12280 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18375-4912/.minikube/config/config.json: open /home/jenkins/minikube-integration/18375-4912/.minikube/config/config.json: no such file or directory
	I0313 23:26:12.032411   12280 out.go:298] Setting JSON to true
	I0313 23:26:12.033226   12280 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":515,"bootTime":1710371857,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:26:12.033288   12280 start.go:139] virtualization: kvm guest
	I0313 23:26:12.035948   12280 out.go:97] [download-only-628793] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:26:12.037677   12280 out.go:169] MINIKUBE_LOCATION=18375
	W0313 23:26:12.036077   12280 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball: no such file or directory
	I0313 23:26:12.036124   12280 notify.go:220] Checking for updates...
	I0313 23:26:12.040406   12280 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:26:12.041890   12280 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:26:12.043639   12280 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:26:12.045093   12280 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0313 23:26:12.047758   12280 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0313 23:26:12.047985   12280 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:26:12.149814   12280 out.go:97] Using the kvm2 driver based on user configuration
	I0313 23:26:12.149849   12280 start.go:297] selected driver: kvm2
	I0313 23:26:12.149855   12280 start.go:901] validating driver "kvm2" against <nil>
	I0313 23:26:12.150199   12280 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:26:12.150334   12280 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0313 23:26:12.165164   12280 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0313 23:26:12.165230   12280 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0313 23:26:12.165731   12280 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0313 23:26:12.165880   12280 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0313 23:26:12.165939   12280 cni.go:84] Creating CNI manager for ""
	I0313 23:26:12.165953   12280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0313 23:26:12.165960   12280 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0313 23:26:12.166015   12280 start.go:340] cluster config:
	{Name:download-only-628793 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-628793 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:26:12.166181   12280 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:26:12.168124   12280 out.go:97] Downloading VM boot image ...
	I0313 23:26:12.168167   12280 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18375-4912/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0313 23:26:20.316291   12280 out.go:97] Starting "download-only-628793" primary control-plane node in "download-only-628793" cluster
	I0313 23:26:20.316309   12280 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0313 23:26:20.414588   12280 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0313 23:26:20.414629   12280 cache.go:56] Caching tarball of preloaded images
	I0313 23:26:20.414803   12280 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0313 23:26:20.416940   12280 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0313 23:26:20.416964   12280 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0313 23:26:20.515637   12280 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0313 23:26:31.767330   12280 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0313 23:26:31.767415   12280 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-628793 host does not exist
	  To start a cluster, run: "minikube start -p download-only-628793"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-628793
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (14.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-690080 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-690080 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.644459969s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (14.64s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-690080
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-690080: exit status 85 (78.158756ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-628793 | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC |                     |
	|         | -p download-only-628793        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC | 13 Mar 24 23:26 UTC |
	| delete  | -p download-only-628793        | download-only-628793 | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC | 13 Mar 24 23:26 UTC |
	| start   | -o=json --download-only        | download-only-690080 | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC |                     |
	|         | -p download-only-690080        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/13 23:26:33
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0313 23:26:33.865387   12471 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:26:33.865487   12471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:26:33.865491   12471 out.go:304] Setting ErrFile to fd 2...
	I0313 23:26:33.865498   12471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:26:33.865688   12471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:26:33.866236   12471 out.go:298] Setting JSON to true
	I0313 23:26:33.867078   12471 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":537,"bootTime":1710371857,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:26:33.867143   12471 start.go:139] virtualization: kvm guest
	I0313 23:26:33.869478   12471 out.go:97] [download-only-690080] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:26:33.871181   12471 out.go:169] MINIKUBE_LOCATION=18375
	I0313 23:26:33.869615   12471 notify.go:220] Checking for updates...
	I0313 23:26:33.873986   12471 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:26:33.875851   12471 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:26:33.877219   12471 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:26:33.878792   12471 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0313 23:26:33.881822   12471 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0313 23:26:33.882038   12471 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:26:33.913943   12471 out.go:97] Using the kvm2 driver based on user configuration
	I0313 23:26:33.913986   12471 start.go:297] selected driver: kvm2
	I0313 23:26:33.913993   12471 start.go:901] validating driver "kvm2" against <nil>
	I0313 23:26:33.914330   12471 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:26:33.914416   12471 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0313 23:26:33.929130   12471 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0313 23:26:33.929197   12471 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0313 23:26:33.929642   12471 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0313 23:26:33.929771   12471 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0313 23:26:33.929800   12471 cni.go:84] Creating CNI manager for ""
	I0313 23:26:33.929810   12471 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0313 23:26:33.929819   12471 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0313 23:26:33.929872   12471 start.go:340] cluster config:
	{Name:download-only-690080 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-690080 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:26:33.929960   12471 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:26:33.931745   12471 out.go:97] Starting "download-only-690080" primary control-plane node in "download-only-690080" cluster
	I0313 23:26:33.931761   12471 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:26:34.029855   12471 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0313 23:26:34.029888   12471 cache.go:56] Caching tarball of preloaded images
	I0313 23:26:34.030056   12471 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0313 23:26:34.032104   12471 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0313 23:26:34.032132   12471 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0313 23:26:34.129463   12471 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-690080 host does not exist
	  To start a cluster, run: "minikube start -p download-only-690080"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-690080
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (19.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-312826 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-312826 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (19.698344529s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (19.70s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-312826
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-312826: exit status 85 (75.850666ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-628793 | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC |                     |
	|         | -p download-only-628793           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC | 13 Mar 24 23:26 UTC |
	| delete  | -p download-only-628793           | download-only-628793 | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC | 13 Mar 24 23:26 UTC |
	| start   | -o=json --download-only           | download-only-690080 | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC |                     |
	|         | -p download-only-690080           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC | 13 Mar 24 23:26 UTC |
	| delete  | -p download-only-690080           | download-only-690080 | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC | 13 Mar 24 23:26 UTC |
	| start   | -o=json --download-only           | download-only-312826 | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC |                     |
	|         | -p download-only-312826           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/13 23:26:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0313 23:26:48.870253   12646 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:26:48.870507   12646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:26:48.870517   12646 out.go:304] Setting ErrFile to fd 2...
	I0313 23:26:48.870521   12646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:26:48.870751   12646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:26:48.871309   12646 out.go:298] Setting JSON to true
	I0313 23:26:48.872079   12646 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":552,"bootTime":1710371857,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:26:48.872137   12646 start.go:139] virtualization: kvm guest
	I0313 23:26:48.874351   12646 out.go:97] [download-only-312826] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:26:48.876174   12646 out.go:169] MINIKUBE_LOCATION=18375
	I0313 23:26:48.874514   12646 notify.go:220] Checking for updates...
	I0313 23:26:48.878995   12646 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:26:48.880705   12646 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:26:48.882235   12646 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:26:48.883676   12646 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0313 23:26:48.885987   12646 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0313 23:26:48.886225   12646 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:26:48.918459   12646 out.go:97] Using the kvm2 driver based on user configuration
	I0313 23:26:48.918495   12646 start.go:297] selected driver: kvm2
	I0313 23:26:48.918502   12646 start.go:901] validating driver "kvm2" against <nil>
	I0313 23:26:48.918842   12646 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:26:48.918920   12646 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0313 23:26:48.933680   12646 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0313 23:26:48.933725   12646 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0313 23:26:48.934184   12646 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0313 23:26:48.934365   12646 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0313 23:26:48.934418   12646 cni.go:84] Creating CNI manager for ""
	I0313 23:26:48.934432   12646 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0313 23:26:48.934440   12646 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0313 23:26:48.934486   12646 start.go:340] cluster config:
	{Name:download-only-312826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-312826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:26:48.934583   12646 iso.go:125] acquiring lock: {Name:mke91ca51c8ff7dca6e70f37053e800141f67cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:26:48.936485   12646 out.go:97] Starting "download-only-312826" primary control-plane node in "download-only-312826" cluster
	I0313 23:26:48.936504   12646 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0313 23:26:49.032809   12646 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0313 23:26:49.032843   12646 cache.go:56] Caching tarball of preloaded images
	I0313 23:26:49.032987   12646 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0313 23:26:49.035105   12646 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0313 23:26:49.035127   12646 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0313 23:26:49.134899   12646 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/18375-4912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-312826 host does not exist
	  To start a cluster, run: "minikube start -p download-only-312826"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
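
The download-only run above only populates the local cache (preload tarball, images, binaries); no VM is created, which is why "minikube logs" for the profile exits with status 85 and reports that the host does not exist. A minimal by-hand sketch of the same flow, using a hypothetical profile name and the flags shown in the log:

    # cache artifacts for a specific Kubernetes version without booting a node
    minikube start -p download-demo --download-only --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=crio
    # remove the cached profile again
    minikube delete -p download-demo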

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-312826
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (1.39s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-474267 --alsologtostderr --binary-mirror http://127.0.0.1:40817 --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:314: (dbg) Done: out/minikube-linux-amd64 start --download-only -p binary-mirror-474267 --alsologtostderr --binary-mirror http://127.0.0.1:40817 --driver=kvm2  --container-runtime=crio: (1.119449414s)
helpers_test.go:175: Cleaning up "binary-mirror-474267" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-474267
--- PASS: TestBinaryMirror (1.39s)
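
TestBinaryMirror points the kubeadm/kubelet/kubectl downloads at a local HTTP mirror via --binary-mirror. A rough by-hand equivalent, assuming a mirror is already serving on the address used here (the port is just the one this run happened to pick):

    minikube start -p binary-mirror-demo --download-only --binary-mirror http://127.0.0.1:40817 --driver=kvm2 --container-runtime=crio
    minikube delete -p binary-mirror-demo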

                                                
                                    
TestOffline (101.37s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-820136 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-820136 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.454952485s)
helpers_test.go:175: Cleaning up "offline-crio-820136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-820136
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-820136: (1.915453775s)
--- PASS: TestOffline (101.37s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-524943
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-524943: exit status 85 (61.579643ms)

                                                
                                                
-- stdout --
	* Profile "addons-524943" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-524943"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-524943
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-524943: exit status 85 (63.508578ms)

                                                
                                                
-- stdout --
	* Profile "addons-524943" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-524943"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
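
Both PreSetup checks confirm that addon commands against a profile that has never been started fail fast (exit status 85 in this run) rather than hanging. Roughly, with a hypothetical profile name:

    # expected to fail because the profile does not exist yet
    minikube addons enable dashboard -p does-not-exist; echo "exit=$?"
    # show the profiles that do exist
    minikube profile list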

                                                
                                    
TestAddons/Setup (153.73s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-524943 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-524943 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m33.73055321s)
--- PASS: TestAddons/Setup (153.73s)
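
The setup above enables thirteen addons in a single start. The same result can be approximated incrementally; a minimal sketch using only a few of the addons listed above:

    minikube start -p addons-demo --memory=4000 --driver=kvm2 --container-runtime=crio \
      --addons=ingress --addons=metrics-server --addons=registry
    # addons can also be toggled after the cluster is up
    minikube addons enable csi-hostpath-driver -p addons-demo
    minikube addons list -p addons-demo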

                                                
                                    
TestAddons/parallel/Registry (18.21s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 30.767662ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-slzzm" [1bb6324b-9959-47c3-94b5-7217cd8ac6ee] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.007858567s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-x4zxx" [07ee3fab-197e-40bc-9c11-42d8c9f9ab20] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005844788s
addons_test.go:340: (dbg) Run:  kubectl --context addons-524943 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-524943 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-524943 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.901679697s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-524943 ip
2024/03/13 23:30:01 [DEBUG] GET http://192.168.39.37:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-524943 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-524943 addons disable registry --alsologtostderr -v=1: (1.091624443s)
--- PASS: TestAddons/parallel/Registry (18.21s)
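
The registry check boils down to: wait for the registry and registry-proxy pods, probe the in-cluster Service from a throwaway pod, then fetch the proxied port on the node IP. Condensed, using the profile from this run:

    # probe the Service DNS name from inside the cluster
    kubectl --context addons-524943 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # node IP; the test then fetches http://<node-ip>:5000 through the registry proxy
    minikube -p addons-524943 ip
    minikube -p addons-524943 addons disable registry --alsologtostderr -v=1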

                                                
                                    
TestAddons/parallel/InspektorGadget (12.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rh4fn" [f3e6428f-9d55-4362-a54e-0decabd0ee26] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004735098s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-524943
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-524943: (6.278038387s)
--- PASS: TestAddons/parallel/InspektorGadget (12.28s)
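
This is the pattern most of the addon checks in this group follow: wait for the addon's pods by label, then disable the addon. The test helpers poll the API directly; an approximate kubectl equivalent would be:

    kubectl --context addons-524943 wait --for=condition=Ready pod -l k8s-app=gadget -n gadget --timeout=480s
    minikube addons disable inspektor-gadget -p addons-524943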

                                                
                                    
TestAddons/parallel/MetricsServer (6.98s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 30.964708ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-q6mlw" [64934ba6-025a-4498-a9a0-16c88811d1e7] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005970075s
addons_test.go:415: (dbg) Run:  kubectl --context addons-524943 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-524943 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.98s)
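
The metrics-server check is simply: once the deployment's pod is Running, "kubectl top" should return rows. For reference:

    kubectl --context addons-524943 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context addons-524943 top pods -n kube-system
    minikube -p addons-524943 addons disable metrics-server --alsologtostderr -v=1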

                                                
                                    
TestAddons/parallel/HelmTiller (12.63s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 31.210555ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-rmstt" [bf228b28-7f98-4e4b-ba99-5547d3ad59eb] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.008258463s
addons_test.go:473: (dbg) Run:  kubectl --context addons-524943 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-524943 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.84549186s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-524943 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.63s)

                                                
                                    
TestAddons/parallel/CSI (108.99s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 32.952483ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-524943 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-524943 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ba785bdc-653c-42d2-9c28-176224b9afd1] Pending
helpers_test.go:344: "task-pv-pod" [ba785bdc-653c-42d2-9c28-176224b9afd1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ba785bdc-653c-42d2-9c28-176224b9afd1] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.008981612s
addons_test.go:584: (dbg) Run:  kubectl --context addons-524943 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-524943 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-524943 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-524943 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-524943 delete pod task-pv-pod: (1.244685731s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-524943 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-524943 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-524943 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7b4230ff-5eeb-43bc-8bc9-0b1da6c5374b] Pending
helpers_test.go:344: "task-pv-pod-restore" [7b4230ff-5eeb-43bc-8bc9-0b1da6c5374b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7b4230ff-5eeb-43bc-8bc9-0b1da6c5374b] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003708188s
addons_test.go:626: (dbg) Run:  kubectl --context addons-524943 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-524943 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-524943 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-524943 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-524943 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.872829692s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-524943 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (108.99s)
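
Condensed, the CSI exercise above is a provision, snapshot, and restore round-trip against the csi-hostpath driver; binding and readiness are polled with the jsonpath queries seen above. The manifests referenced here live under the minikube repo's testdata directory; the kubectl sequence, in the order the test runs it, is roughly:

    kubectl --context addons-524943 create -f testdata/csi-hostpath-driver/pvc.yaml           # PVC "hpvc"
    kubectl --context addons-524943 create -f testdata/csi-hostpath-driver/pv-pod.yaml        # pod "task-pv-pod" mounts it
    kubectl --context addons-524943 create -f testdata/csi-hostpath-driver/snapshot.yaml      # VolumeSnapshot "new-snapshot-demo"
    kubectl --context addons-524943 delete pod task-pv-pod
    kubectl --context addons-524943 delete pvc hpvc
    kubectl --context addons-524943 create -f testdata/csi-hostpath-driver/pvc-restore.yaml   # PVC "hpvc-restore" sourced from the snapshot
    kubectl --context addons-524943 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
    kubectl --context addons-524943 delete pod task-pv-pod-restore
    kubectl --context addons-524943 delete pvc hpvc-restore
    kubectl --context addons-524943 delete volumesnapshot new-snapshot-demo
    minikube -p addons-524943 addons disable csi-hostpath-driver --alsologtostderr -v=1
    minikube -p addons-524943 addons disable volumesnapshots --alsologtostderr -v=1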

                                                
                                    
TestAddons/parallel/Headlamp (25.54s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-524943 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-524943 --alsologtostderr -v=1: (1.536674181s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-cl9dl" [a4001a0c-3b65-4899-acc4-883f7c9ca10a] Pending
helpers_test.go:344: "headlamp-5485c556b-cl9dl" [a4001a0c-3b65-4899-acc4-883f7c9ca10a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-cl9dl" [a4001a0c-3b65-4899-acc4-883f7c9ca10a] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 24.003731013s
--- PASS: TestAddons/parallel/Headlamp (25.54s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-6wgwb" [2ed1420d-8f4e-48f1-b103-5e3431e34847] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003956519s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-524943
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                    
TestAddons/parallel/LocalPath (57.25s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-524943 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-524943 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-524943 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b308529d-cb9f-4005-b5fd-be348a1aa44b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b308529d-cb9f-4005-b5fd-be348a1aa44b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b308529d-cb9f-4005-b5fd-be348a1aa44b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.00517109s
addons_test.go:891: (dbg) Run:  kubectl --context addons-524943 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-524943 ssh "cat /opt/local-path-provisioner/pvc-73906157-3eeb-4425-bdc1-b9ef4702f661_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-524943 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-524943 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-524943 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-524943 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.377270085s)
--- PASS: TestAddons/parallel/LocalPath (57.25s)
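
The local-path flow is analogous: the Rancher provisioner backs the PVC with a host directory under /opt/local-path-provisioner, which is why the test can ssh into the node and read back the file the pod wrote. Condensed (the generated pvc-<uid>_default_test-pvc directory name differs per run):

    kubectl --context addons-524943 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-524943 apply -f testdata/storage-provisioner-rancher/pod.yaml
    kubectl --context addons-524943 get pvc test-pvc -o jsonpath={.status.phase}
    minikube -p addons-524943 ssh "cat /opt/local-path-provisioner/<pvc-dir>/file1"
    kubectl --context addons-524943 delete pod test-local-path
    kubectl --context addons-524943 delete pvc test-pvc
    minikube -p addons-524943 addons disable storage-provisioner-rancher --alsologtostderr -v=1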

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gfg8n" [b18807a5-a89a-4b4a-bce8-2cf7ba25d3c2] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005955156s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-524943
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-skjc9" [e124fdac-4a7a-4cd3-8f6e-144e97cb825e] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004376778s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-524943 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-524943 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)
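
This check relies on the gcp-auth addon making its credentials secret available in newly created namespaces, so the whole test is two commands:

    kubectl --context addons-524943 create ns new-namespace
    kubectl --context addons-524943 get secret gcp-auth -n new-namespace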

                                                
                                    
TestCertOptions (60.96s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-853890 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-853890 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (58.939709678s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-853890 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-853890 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-853890 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-853890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-853890
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-853890: (1.51325817s)
--- PASS: TestCertOptions (60.96s)
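
TestCertOptions verifies that extra SANs and a non-default API server port end up in the generated serving certificate and in the kubeconfig on the node. Reproduced by hand with a hypothetical profile name:

    minikube start -p cert-options-demo --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # the SANs should appear in the apiserver certificate...
    minikube -p cert-options-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    # ...and the port in the admin kubeconfig on the node
    minikube ssh -p cert-options-demo -- "sudo cat /etc/kubernetes/admin.conf"
    minikube delete -p cert-options-demo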

                                                
                                    
TestCertExpiration (285.06s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-577166 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-577166 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m12.453025024s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-577166 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-577166 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (31.556163365s)
helpers_test.go:175: Cleaning up "cert-expiration-577166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-577166
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-577166: (1.05268194s)
--- PASS: TestCertExpiration (285.06s)
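
The two starts differ only in --cert-expiration: the first issues certificates valid for 3 minutes, and the second start, which the test appears to run a few minutes later, re-issues them with a one-year lifetime. In short:

    minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # ...later, once the short-lived certificates are near or past expiry...
    minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
    minikube delete -p cert-expiration-demo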

                                                
                                    
TestForceSystemdFlag (63.02s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-058213 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0314 00:38:19.382094   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0314 00:38:36.336094   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-058213 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m1.255905716s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-058213 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-058213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-058213
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-058213: (1.539212794s)
--- PASS: TestForceSystemdFlag (63.02s)
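
--force-systemd switches the container runtime to the systemd cgroup manager; the follow-up ssh just confirms the setting landed in CRI-O's drop-in config. A minimal sketch:

    minikube start -p force-systemd-demo --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
    # cgroup_manager should be set to "systemd" in the generated drop-in
    minikube -p force-systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
    minikube delete -p force-systemd-demo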

                                                
                                    
TestForceSystemdEnv (71.07s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-233196 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-233196 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.992219015s)
helpers_test.go:175: Cleaning up "force-systemd-env-233196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-233196
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-233196: (1.073616825s)
--- PASS: TestForceSystemdEnv (71.07s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.56s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.56s)

                                                
                                    
TestErrorSpam/setup (47.04s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-299072 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-299072 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-299072 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-299072 --driver=kvm2  --container-runtime=crio: (47.037657112s)
--- PASS: TestErrorSpam/setup (47.04s)

                                                
                                    
TestErrorSpam/start (0.39s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

                                                
                                    
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
TestErrorSpam/pause (1.66s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 pause
--- PASS: TestErrorSpam/pause (1.66s)

                                                
                                    
TestErrorSpam/unpause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

                                                
                                    
TestErrorSpam/stop (5.74s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 stop: (2.304782739s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 stop: (1.665669078s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-299072 --log_dir /tmp/nospam-299072 stop: (1.772117825s)
--- PASS: TestErrorSpam/stop (5.74s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18375-4912/.minikube/files/etc/test/nested/copy/12268/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (96.77s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-112122 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-112122 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m36.771566782s)
--- PASS: TestFunctional/serial/StartWithProxy (96.77s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.45s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-112122 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-112122 --alsologtostderr -v=8: (35.452747062s)
functional_test.go:659: soft start took 35.453381917s for "functional-112122" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.45s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-112122 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 cache add registry.k8s.io/pause:3.1: (1.205078429s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 cache add registry.k8s.io/pause:3.3: (1.288043605s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 cache add registry.k8s.io/pause:latest: (1.211594711s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.71s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-112122 /tmp/TestFunctionalserialCacheCmdcacheadd_local1219898366/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 cache add minikube-local-cache-test:functional-112122
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 cache add minikube-local-cache-test:functional-112122: (1.772717326s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 cache delete minikube-local-cache-test:functional-112122
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-112122
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.17s)
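
For reference, a minimal Go sketch of the local-image cache cycle this test drives (docker build, minikube cache add, then cleanup). It is not part of the report; it assumes the out/minikube-linux-amd64 binary and the functional-112122 profile from this run, and that the current directory contains some Dockerfile to build (the test used a temporary directory instead).

package main

import (
	"log"
	"os/exec"
)

func main() {
	img := "minikube-local-cache-test:functional-112122"
	// Build a throwaway local image, push it into minikube's cache,
	// then remove it from both the cache and the local docker store.
	steps := [][]string{
		{"docker", "build", "-t", img, "."},
		{"out/minikube-linux-amd64", "-p", "functional-112122", "cache", "add", img},
		{"out/minikube-linux-amd64", "-p", "functional-112122", "cache", "delete", img},
		{"docker", "rmi", img},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %v\n%s", s, err, out)
		}
	}
}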

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-112122 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (227.712315ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
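
A minimal Go sketch of the reload cycle exercised above (remove pause:latest from the node, confirm crictl no longer finds it, run cache reload, re-check). Not part of the original run; it assumes the out/minikube-linux-amd64 binary and the functional-112122 profile are available in the working directory, as in this job.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

const (
	bin     = "out/minikube-linux-amd64"
	profile = "functional-112122"
	image   = "registry.k8s.io/pause:latest"
)

// run invokes minikube against the test profile and returns combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command(bin, append([]string{"-p", profile}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// Remove the cached image from the node's container runtime.
	if out, err := run("ssh", "sudo crictl rmi "+image); err != nil {
		log.Fatalf("rmi failed: %v\n%s", err, out)
	}
	// inspecti should now fail: the image is gone from the node.
	if _, err := run("ssh", "sudo crictl inspecti "+image); err == nil {
		log.Fatal("image still present after rmi")
	}
	// Reload everything in the local cache back onto the node.
	if out, err := run("cache", "reload"); err != nil {
		log.Fatalf("cache reload failed: %v\n%s", err, out)
	}
	// The image should be inspectable again.
	if out, err := run("ssh", "sudo crictl inspecti "+image); err != nil {
		log.Fatalf("image missing after reload: %v\n%s", err, out)
	}
	fmt.Println("cache reload cycle OK")
}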

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 kubectl -- --context functional-112122 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-112122 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (285.16s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-112122 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0313 23:39:44.448878   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:39:44.454639   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:39:44.464902   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:39:44.485192   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:39:44.525542   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:39:44.605944   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:39:44.766401   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:39:45.086967   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:39:45.727886   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:39:47.008382   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:39:49.569205   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:39:54.689978   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:40:04.930952   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:40:25.411430   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:41:06.372710   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:42:28.296396   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-112122 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m45.161477062s)
functional_test.go:757: restart took 4m45.161627195s for "functional-112122" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (285.16s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-112122 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
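
A sketch, in Go, of the control-plane health check performed above: list the tier=control-plane pods as JSON and report each component's phase and Ready condition. It is an illustration only, assuming the functional-112122 kubeconfig context from this run and the usual kubeadm "component" label on static pods.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList captures only the PodList fields the health check needs.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-112122",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}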

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 logs: (1.228503145s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 logs --file /tmp/TestFunctionalserialLogsFileCmd1875339236/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 logs --file /tmp/TestFunctionalserialLogsFileCmd1875339236/001/logs.txt: (1.263228792s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.18s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-112122 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-112122
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-112122: exit status 115 (295.520103ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.224:30281 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-112122 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.18s)
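
A hedged Go sketch of the failure path checked above: asking minikube for a service that has no running pods should exit non-zero (exit status 115, SVC_UNREACHABLE, in this run). It assumes invalid-svc from testdata/invalidsvc.yaml is still applied and the out/minikube-linux-amd64 binary is present.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-112122")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In this run the unreachable service produced exit status 115.
		fmt.Printf("exit code %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal("expected the command to fail for a service with no running pods")
}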

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-112122 config get cpus: exit status 14 (74.153001ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-112122 config get cpus: exit status 14 (64.092695ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
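
A minimal Go sketch of the unset/get/set round-trip above, including the exit status 14 that `config get` returns for a missing key in this run. It assumes the same binary and profile as the test.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

// config runs a minikube config subcommand and returns its output and exit code.
func config(args ...string) (string, int, error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-112122", "config"}, args...)...).CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return string(out), exitErr.ExitCode(), nil
	}
	return string(out), 0, err
}

func main() {
	// Start from a clean slate; unset succeeds even when the key is absent.
	if _, _, err := config("unset", "cpus"); err != nil {
		log.Fatal(err)
	}
	// A get on an unset key exits 14, matching the run above.
	if _, code, err := config("get", "cpus"); err != nil || code != 14 {
		log.Fatalf("expected exit 14 for a missing key, got %d (%v)", code, err)
	}
	if _, _, err := config("set", "cpus", "2"); err != nil {
		log.Fatal(err)
	}
	out, code, err := config("get", "cpus")
	if err != nil || code != 0 {
		log.Fatalf("get after set failed (exit %d): %v", code, err)
	}
	fmt.Print(out)
}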

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (13.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-112122 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-112122 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21678: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.39s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-112122 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-112122 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.256725ms)

                                                
                                                
-- stdout --
	* [functional-112122] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0313 23:44:09.944639   21326 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:44:09.944870   21326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:44:09.944878   21326 out.go:304] Setting ErrFile to fd 2...
	I0313 23:44:09.944882   21326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:44:09.945071   21326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:44:09.945619   21326 out.go:298] Setting JSON to false
	I0313 23:44:09.946515   21326 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1593,"bootTime":1710371857,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:44:09.946579   21326 start.go:139] virtualization: kvm guest
	I0313 23:44:09.948584   21326 out.go:177] * [functional-112122] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:44:09.950176   21326 notify.go:220] Checking for updates...
	I0313 23:44:09.950183   21326 out.go:177]   - MINIKUBE_LOCATION=18375
	I0313 23:44:09.951479   21326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:44:09.952710   21326 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:44:09.954012   21326 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:44:09.955122   21326 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0313 23:44:09.956294   21326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0313 23:44:09.957804   21326 config.go:182] Loaded profile config "functional-112122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:44:09.958283   21326 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:44:09.958324   21326 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:44:09.973014   21326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34189
	I0313 23:44:09.973419   21326 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:44:09.973990   21326 main.go:141] libmachine: Using API Version  1
	I0313 23:44:09.974016   21326 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:44:09.974350   21326 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:44:09.974497   21326 main.go:141] libmachine: (functional-112122) Calling .DriverName
	I0313 23:44:09.974810   21326 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:44:09.975081   21326 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:44:09.975128   21326 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:44:09.990115   21326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37561
	I0313 23:44:09.990558   21326 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:44:09.991009   21326 main.go:141] libmachine: Using API Version  1
	I0313 23:44:09.991031   21326 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:44:09.991323   21326 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:44:09.991510   21326 main.go:141] libmachine: (functional-112122) Calling .DriverName
	I0313 23:44:10.023887   21326 out.go:177] * Using the kvm2 driver based on existing profile
	I0313 23:44:10.025259   21326 start.go:297] selected driver: kvm2
	I0313 23:44:10.025270   21326 start.go:901] validating driver "kvm2" against &{Name:functional-112122 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-112122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:44:10.025370   21326 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0313 23:44:10.027292   21326 out.go:177] 
	W0313 23:44:10.028549   21326 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0313 23:44:10.029882   21326 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-112122 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
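
A Go sketch of the dry-run memory validation seen above: a 250MB request fails driver validation (exit status 23 in this run) while a dry-run without the override passes. It is illustrative only, assuming the existing functional-112122 profile and kvm2/crio configuration from this job.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

// dryRun starts minikube in --dry-run mode and returns the process exit code.
func dryRun(extra ...string) int {
	args := append([]string{"start", "-p", "functional-112122", "--dry-run",
		"--driver=kvm2", "--container-runtime=crio"}, extra...)
	err := exec.Command("out/minikube-linux-amd64", args...).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	if err != nil {
		log.Fatal(err)
	}
	return 0
}

func main() {
	// 250MB is below the usable minimum, so validation fails (exit 23 in this run).
	fmt.Println("low memory dry-run exit:", dryRun("--memory", "250MB"))
	// Without the memory override the same dry-run validates cleanly.
	fmt.Println("default dry-run exit:", dryRun())
}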

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-112122 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-112122 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (146.376846ms)

                                                
                                                
-- stdout --
	* [functional-112122] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0313 23:44:10.235479   21382 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:44:10.235599   21382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:44:10.235608   21382 out.go:304] Setting ErrFile to fd 2...
	I0313 23:44:10.235613   21382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:44:10.235875   21382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0313 23:44:10.236493   21382 out.go:298] Setting JSON to false
	I0313 23:44:10.237560   21382 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1594,"bootTime":1710371857,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:44:10.237645   21382 start.go:139] virtualization: kvm guest
	I0313 23:44:10.240178   21382 out.go:177] * [functional-112122] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0313 23:44:10.241471   21382 out.go:177]   - MINIKUBE_LOCATION=18375
	I0313 23:44:10.242621   21382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:44:10.241531   21382 notify.go:220] Checking for updates...
	I0313 23:44:10.244778   21382 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0313 23:44:10.245923   21382 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0313 23:44:10.247016   21382 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0313 23:44:10.248023   21382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0313 23:44:10.249380   21382 config.go:182] Loaded profile config "functional-112122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0313 23:44:10.249733   21382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:44:10.249770   21382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:44:10.264661   21382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0313 23:44:10.265018   21382 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:44:10.265572   21382 main.go:141] libmachine: Using API Version  1
	I0313 23:44:10.265598   21382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:44:10.265950   21382 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:44:10.266134   21382 main.go:141] libmachine: (functional-112122) Calling .DriverName
	I0313 23:44:10.266372   21382 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:44:10.266648   21382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0313 23:44:10.266682   21382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:44:10.280895   21382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43777
	I0313 23:44:10.281256   21382 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:44:10.281692   21382 main.go:141] libmachine: Using API Version  1
	I0313 23:44:10.281720   21382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:44:10.281993   21382 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:44:10.282195   21382 main.go:141] libmachine: (functional-112122) Calling .DriverName
	I0313 23:44:10.312785   21382 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0313 23:44:10.314189   21382 start.go:297] selected driver: kvm2
	I0313 23:44:10.314204   21382 start.go:901] validating driver "kvm2" against &{Name:functional-112122 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-112122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:44:10.314347   21382 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0313 23:44:10.316598   21382 out.go:177] 
	W0313 23:44:10.317768   21382 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0313 23:44:10.319171   21382 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.82s)
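
For illustration, a small Go sketch of the templated status query used above; the -f flag takes a Go template over the status fields (.Host, .Kubelet, .APIServer, .Kubeconfig). Same binary and profile assumed as in the test.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Mirror the format string from the test, rendering selected status fields.
	format := "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-112122",
		"status", "-f", format).CombinedOutput()
	if err != nil {
		log.Fatalf("status failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}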

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-112122 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-112122 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-x8s4h" [9601e52e-9bd5-4dd0-a1dc-f0a1195745d7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-x8s4h" [9601e52e-9bd5-4dd0-a1dc-f0a1195745d7] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004197753s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.224:32334
functional_test.go:1671: http://192.168.39.224:32334: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-x8s4h

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.224:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.224:32334
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.90s)
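
A minimal Go sketch of the connectivity check above: resolve the NodePort URL for the exposed deployment with `minikube service --url`, then issue an HTTP GET against it. It assumes the hello-node-connect deployment and service created earlier in this test still exist.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL of the exposed deployment, then hit it.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-112122",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("GET %s -> %s\n%s", url, resp.Status, body)
}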

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (52.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7e75ab18-f661-497e-95d1-171c1751705a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01082841s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-112122 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-112122 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-112122 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-112122 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-112122 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [349c4744-725d-4b1a-9c0a-093c8e8f4816] Pending
helpers_test.go:344: "sp-pod" [349c4744-725d-4b1a-9c0a-093c8e8f4816] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [349c4744-725d-4b1a-9c0a-093c8e8f4816] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.005657258s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-112122 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-112122 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-112122 delete -f testdata/storage-provisioner/pod.yaml: (1.666110676s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-112122 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8c43d182-80ae-4e12-8c5d-a9df39899397] Pending
helpers_test.go:344: "sp-pod" [8c43d182-80ae-4e12-8c5d-a9df39899397] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8c43d182-80ae-4e12-8c5d-a9df39899397] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.005205102s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-112122 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (52.88s)
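
A Go sketch of the persistence check above: write a marker file onto the PVC-backed mount, recreate the pod, and confirm the file survives. Assumptions: the functional-112122 context, the myclaim PVC, and a running sp-pod from testdata/storage-provisioner/pod.yaml are already in place; `kubectl wait` is used here in place of the test's own polling helper.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func kubectl(args ...string) string {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-112122"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Write a marker file onto the PVC-backed mount from the first pod.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// Recreate the pod; the claim (and its data) outlives it.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=ready", "pod/sp-pod", "--timeout=3m")
	// The marker written by the old pod should still be on the volume.
	fmt.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}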

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh -n functional-112122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 cp functional-112122:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2652983164/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh -n functional-112122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh -n functional-112122 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)
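
A brief Go sketch of the copy round-trip above: push a local file into the guest with `minikube cp`, read it back over ssh, then copy it out again. The /tmp/cp-test-roundtrip.txt destination is an arbitrary local path chosen for the example; binary, profile, and testdata path are as in this run.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func mk(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-112122"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Copy a local file into the guest, then read it back over ssh to confirm.
	mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Print(mk("ssh", "-n", "functional-112122", "sudo cat /home/docker/cp-test.txt"))
	// cp also works in the other direction: guest path -> local path.
	mk("cp", "functional-112122:/home/docker/cp-test.txt", "/tmp/cp-test-roundtrip.txt")
}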

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-112122 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-lh2fm" [e0c597ed-013c-4733-92a9-be2b85ac4bca] Pending
helpers_test.go:344: "mysql-859648c796-lh2fm" [e0c597ed-013c-4733-92a9-be2b85ac4bca] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-lh2fm" [e0c597ed-013c-4733-92a9-be2b85ac4bca] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.017209198s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-112122 exec mysql-859648c796-lh2fm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-112122 exec mysql-859648c796-lh2fm -- mysql -ppassword -e "show databases;": exit status 1 (375.683921ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-112122 exec mysql-859648c796-lh2fm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-112122 exec mysql-859648c796-lh2fm -- mysql -ppassword -e "show databases;": exit status 1 (598.229299ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-112122 exec mysql-859648c796-lh2fm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.89s)
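
A Go sketch of the retry pattern visible above: the first queries fail while mysqld is still initializing (access denied, then the socket error), so keep re-running "show databases;" until it succeeds or a deadline passes. The pod name is the one from this run and will differ elsewhere.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// mysqld is still coming up when the pod first reports Running, so the
	// first queries can fail (as in the log above); retry until one succeeds.
	deadline := time.Now().Add(5 * time.Minute)
	for {
		out, err := exec.Command("kubectl", "--context", "functional-112122",
			"exec", "mysql-859648c796-lh2fm", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became ready: %v\n%s", err, out)
		}
		time.Sleep(10 * time.Second)
	}
}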

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/12268/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "sudo cat /etc/test/nested/copy/12268/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/12268.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "sudo cat /etc/ssl/certs/12268.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/12268.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "sudo cat /usr/share/ca-certificates/12268.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/122682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "sudo cat /etc/ssl/certs/122682.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/122682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "sudo cat /usr/share/ca-certificates/122682.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.38s)
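
A Go sketch of the cert sync verification above: confirm the synced certificate, its copy under /usr/share/ca-certificates, and the hashed symlink are all readable inside the VM. The 12268 paths come from this run's test PID and will differ on other runs.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Paths checked by the test: the synced cert, its ca-certificates copy,
	// and the hash-named symlink under /etc/ssl/certs.
	paths := []string{
		"/etc/ssl/certs/12268.pem",
		"/usr/share/ca-certificates/12268.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-112122",
			"ssh", "sudo cat "+p).CombinedOutput()
		if err != nil {
			log.Fatalf("%s missing in VM: %v\n%s", p, err, out)
		}
		fmt.Printf("%s present (%d bytes)\n", p, len(out))
	}
}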

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-112122 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-112122 ssh "sudo systemctl is-active docker": exit status 1 (251.810182ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-112122 ssh "sudo systemctl is-active containerd": exit status 1 (221.75665ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
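
A Go sketch of the runtime check above: with crio active, `systemctl is-active docker` and `systemctl is-active containerd` should both print "inactive". systemctl itself exits 3 for an inactive unit, which `minikube ssh` surfaces as a non-zero exit, as in the log; the code below just reports whatever state and exit code it gets.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-112122",
			"ssh", "sudo systemctl is-active "+unit)
		out, err := cmd.CombinedOutput()
		state := strings.TrimSpace(string(out))
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode()
		}
		fmt.Printf("%s: %s (exit %d)\n", unit, state, code)
	}
}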

                                                
                                    
x
+
TestFunctional/parallel/License (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 version --short
--- PASS: TestFunctional/parallel/Version/short (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
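Note: the three UpdateContextCmd cases above rerun the same command under different kubeconfig states. update-context rewrites the kubeconfig entry for the profile so that kubectl points at the cluster's current IP and port. A sketch:

    $ minikube -p functional-112122 update-context --alsologtostderr -v=2
    $ kubectl --context functional-112122 get nodes   # should reach the cluster at its current address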

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-112122 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-112122
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-112122
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-112122 image ls --format short --alsologtostderr:
I0313 23:44:12.414008   21620 out.go:291] Setting OutFile to fd 1 ...
I0313 23:44:12.414112   21620 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:44:12.414117   21620 out.go:304] Setting ErrFile to fd 2...
I0313 23:44:12.414120   21620 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:44:12.414313   21620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
I0313 23:44:12.414918   21620 config.go:182] Loaded profile config "functional-112122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0313 23:44:12.415006   21620 config.go:182] Loaded profile config "functional-112122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0313 23:44:12.415386   21620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0313 23:44:12.415421   21620 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:44:12.430477   21620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40429
I0313 23:44:12.430948   21620 main.go:141] libmachine: () Calling .GetVersion
I0313 23:44:12.431543   21620 main.go:141] libmachine: Using API Version  1
I0313 23:44:12.431572   21620 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:44:12.431954   21620 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:44:12.432239   21620 main.go:141] libmachine: (functional-112122) Calling .GetState
I0313 23:44:12.434245   21620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0313 23:44:12.434287   21620 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:44:12.449705   21620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
I0313 23:44:12.450178   21620 main.go:141] libmachine: () Calling .GetVersion
I0313 23:44:12.450820   21620 main.go:141] libmachine: Using API Version  1
I0313 23:44:12.450850   21620 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:44:12.451310   21620 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:44:12.451544   21620 main.go:141] libmachine: (functional-112122) Calling .DriverName
I0313 23:44:12.451798   21620 ssh_runner.go:195] Run: systemctl --version
I0313 23:44:12.451833   21620 main.go:141] libmachine: (functional-112122) Calling .GetSSHHostname
I0313 23:44:12.455196   21620 main.go:141] libmachine: (functional-112122) DBG | domain functional-112122 has defined MAC address 52:54:00:f0:68:bc in network mk-functional-112122
I0313 23:44:12.455580   21620 main.go:141] libmachine: (functional-112122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:68:bc", ip: ""} in network mk-functional-112122: {Iface:virbr1 ExpiryTime:2024-03-14 00:36:37 +0000 UTC Type:0 Mac:52:54:00:f0:68:bc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:functional-112122 Clientid:01:52:54:00:f0:68:bc}
I0313 23:44:12.455615   21620 main.go:141] libmachine: (functional-112122) DBG | domain functional-112122 has defined IP address 192.168.39.224 and MAC address 52:54:00:f0:68:bc in network mk-functional-112122
I0313 23:44:12.455727   21620 main.go:141] libmachine: (functional-112122) Calling .GetSSHPort
I0313 23:44:12.455937   21620 main.go:141] libmachine: (functional-112122) Calling .GetSSHKeyPath
I0313 23:44:12.456132   21620 main.go:141] libmachine: (functional-112122) Calling .GetSSHUsername
I0313 23:44:12.456323   21620 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/functional-112122/id_rsa Username:docker}
I0313 23:44:12.540369   21620 ssh_runner.go:195] Run: sudo crictl images --output json
I0313 23:44:12.626505   21620 main.go:141] libmachine: Making call to close driver server
I0313 23:44:12.626522   21620 main.go:141] libmachine: (functional-112122) Calling .Close
I0313 23:44:12.626875   21620 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:44:12.626917   21620 main.go:141] libmachine: Making call to close connection to plugin binary
I0313 23:44:12.626929   21620 main.go:141] libmachine: Making call to close driver server
I0313 23:44:12.626941   21620 main.go:141] libmachine: (functional-112122) Calling .Close
I0313 23:44:12.627174   21620 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:44:12.627198   21620 main.go:141] libmachine: Making call to close connection to plugin binary
I0313 23:44:12.627218   21620 main.go:141] libmachine: (functional-112122) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
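Note: as the stderr above shows, "image ls" works by SSH-ing into the guest and running "sudo crictl images --output json", then formatting the result on the host. The same listing can be taken either way (a sketch):

    $ minikube -p functional-112122 image ls --format short
    $ minikube -p functional-112122 ssh "sudo crictl images"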

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-112122 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-112122  | 542df8c74b9fd | 3.35kB |
| localhost/my-image                      | functional-112122  | 3a5f0d9de052d | 1.47MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| docker.io/library/nginx                 | latest             | 92b11f67642b6 | 191MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| gcr.io/google-containers/addon-resizer  | functional-112122  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-112122 image ls --format table --alsologtostderr:
I0313 23:44:20.375313   22241 out.go:291] Setting OutFile to fd 1 ...
I0313 23:44:20.375445   22241 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:44:20.375457   22241 out.go:304] Setting ErrFile to fd 2...
I0313 23:44:20.375462   22241 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:44:20.375762   22241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
I0313 23:44:20.376399   22241 config.go:182] Loaded profile config "functional-112122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0313 23:44:20.376493   22241 config.go:182] Loaded profile config "functional-112122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0313 23:44:20.376895   22241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0313 23:44:20.376937   22241 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:44:20.392772   22241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
I0313 23:44:20.393246   22241 main.go:141] libmachine: () Calling .GetVersion
I0313 23:44:20.393806   22241 main.go:141] libmachine: Using API Version  1
I0313 23:44:20.393830   22241 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:44:20.394309   22241 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:44:20.394526   22241 main.go:141] libmachine: (functional-112122) Calling .GetState
I0313 23:44:20.396745   22241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0313 23:44:20.396798   22241 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:44:20.414452   22241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42819
I0313 23:44:20.414996   22241 main.go:141] libmachine: () Calling .GetVersion
I0313 23:44:20.415530   22241 main.go:141] libmachine: Using API Version  1
I0313 23:44:20.415555   22241 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:44:20.415923   22241 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:44:20.416104   22241 main.go:141] libmachine: (functional-112122) Calling .DriverName
I0313 23:44:20.416351   22241 ssh_runner.go:195] Run: systemctl --version
I0313 23:44:20.416377   22241 main.go:141] libmachine: (functional-112122) Calling .GetSSHHostname
I0313 23:44:20.419917   22241 main.go:141] libmachine: (functional-112122) DBG | domain functional-112122 has defined MAC address 52:54:00:f0:68:bc in network mk-functional-112122
I0313 23:44:20.420339   22241 main.go:141] libmachine: (functional-112122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:68:bc", ip: ""} in network mk-functional-112122: {Iface:virbr1 ExpiryTime:2024-03-14 00:36:37 +0000 UTC Type:0 Mac:52:54:00:f0:68:bc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:functional-112122 Clientid:01:52:54:00:f0:68:bc}
I0313 23:44:20.420384   22241 main.go:141] libmachine: (functional-112122) DBG | domain functional-112122 has defined IP address 192.168.39.224 and MAC address 52:54:00:f0:68:bc in network mk-functional-112122
I0313 23:44:20.420595   22241 main.go:141] libmachine: (functional-112122) Calling .GetSSHPort
I0313 23:44:20.420821   22241 main.go:141] libmachine: (functional-112122) Calling .GetSSHKeyPath
I0313 23:44:20.420991   22241 main.go:141] libmachine: (functional-112122) Calling .GetSSHUsername
I0313 23:44:20.421147   22241 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/functional-112122/id_rsa Username:docker}
I0313 23:44:20.506018   22241 ssh_runner.go:195] Run: sudo crictl images --output json
I0313 23:44:20.589731   22241 main.go:141] libmachine: Making call to close driver server
I0313 23:44:20.589748   22241 main.go:141] libmachine: (functional-112122) Calling .Close
I0313 23:44:20.590086   22241 main.go:141] libmachine: (functional-112122) DBG | Closing plugin on server side
I0313 23:44:20.590149   22241 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:44:20.590161   22241 main.go:141] libmachine: Making call to close connection to plugin binary
I0313 23:44:20.590171   22241 main.go:141] libmachine: Making call to close driver server
I0313 23:44:20.590178   22241 main.go:141] libmachine: (functional-112122) Calling .Close
I0313 23:44:20.590446   22241 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:44:20.590498   22241 main.go:141] libmachine: Making call to close connection to plugin binary
2024/03/13 23:44:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 image ls --format json --alsologtostderr: (1.793568398s)
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-112122 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa
5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"542df8c74b9fd4fd7be76ab073414fbe43255ddb32e2cac7074d66bb413bc3cd","repoDigests":["localhost/minikube-local-cache-test@sha256:67bf7e713045740f2f383df1519f4ff226d3fe2621557644e58eeff540ec586d"],"repoTags":["localhost/minikube-local-cache-test:functional-112122"],"size":"3345"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e4
9502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-112122"],"size":"34114467"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7f
cab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7","docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865876"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["regist
ry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"afb1bfc7104a3b3d23d75dcaf75a1b6bf9de0b853f7a9875d68b89485bac7f5b","repoDigests":["docker.io/library/4f2737c32faeed140a0cb2e8cc1dcd64bcef98c522a05cd34f
8a0b0510695758-tmp@sha256:cd69a244fb946ba0f2c872049395c32f25719a8c6199e1a6f50e67b4ac1fbbfd"],"repoTags":[],"size":"1466018"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b4
73a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"3a5f0d9de052d144447b1e921364131396b52995699a527a5d43d34d3d118187","repoDigests":["localhost/my-image@sha256:f40898cbc1acf6ed4972220e9b40895024873be79af9370bf4523d6dd4e65442"],"repoTags":["localhost/my-image:functional-112122"],"size":"1468600"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-112122 image ls --format json --alsologtostderr:
I0313 23:44:18.677112   22219 out.go:291] Setting OutFile to fd 1 ...
I0313 23:44:18.677362   22219 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:44:18.677372   22219 out.go:304] Setting ErrFile to fd 2...
I0313 23:44:18.677376   22219 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:44:18.677613   22219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
I0313 23:44:18.678212   22219 config.go:182] Loaded profile config "functional-112122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0313 23:44:18.678323   22219 config.go:182] Loaded profile config "functional-112122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0313 23:44:18.678707   22219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0313 23:44:18.678743   22219 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:44:18.693811   22219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45011
I0313 23:44:18.694314   22219 main.go:141] libmachine: () Calling .GetVersion
I0313 23:44:18.694988   22219 main.go:141] libmachine: Using API Version  1
I0313 23:44:18.695020   22219 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:44:18.695329   22219 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:44:18.695508   22219 main.go:141] libmachine: (functional-112122) Calling .GetState
I0313 23:44:18.697538   22219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0313 23:44:18.697585   22219 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:44:18.713125   22219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42345
I0313 23:44:18.713957   22219 main.go:141] libmachine: () Calling .GetVersion
I0313 23:44:18.714515   22219 main.go:141] libmachine: Using API Version  1
I0313 23:44:18.714548   22219 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:44:18.714933   22219 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:44:18.715155   22219 main.go:141] libmachine: (functional-112122) Calling .DriverName
I0313 23:44:18.715399   22219 ssh_runner.go:195] Run: systemctl --version
I0313 23:44:18.715425   22219 main.go:141] libmachine: (functional-112122) Calling .GetSSHHostname
I0313 23:44:18.718327   22219 main.go:141] libmachine: (functional-112122) DBG | domain functional-112122 has defined MAC address 52:54:00:f0:68:bc in network mk-functional-112122
I0313 23:44:18.718730   22219 main.go:141] libmachine: (functional-112122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:68:bc", ip: ""} in network mk-functional-112122: {Iface:virbr1 ExpiryTime:2024-03-14 00:36:37 +0000 UTC Type:0 Mac:52:54:00:f0:68:bc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:functional-112122 Clientid:01:52:54:00:f0:68:bc}
I0313 23:44:18.718757   22219 main.go:141] libmachine: (functional-112122) DBG | domain functional-112122 has defined IP address 192.168.39.224 and MAC address 52:54:00:f0:68:bc in network mk-functional-112122
I0313 23:44:18.718927   22219 main.go:141] libmachine: (functional-112122) Calling .GetSSHPort
I0313 23:44:18.719107   22219 main.go:141] libmachine: (functional-112122) Calling .GetSSHKeyPath
I0313 23:44:18.719255   22219 main.go:141] libmachine: (functional-112122) Calling .GetSSHUsername
I0313 23:44:18.719369   22219 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/functional-112122/id_rsa Username:docker}
I0313 23:44:18.858241   22219 ssh_runner.go:195] Run: sudo crictl images --output json
I0313 23:44:20.406150   22219 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.547874537s)
I0313 23:44:20.406657   22219 main.go:141] libmachine: Making call to close driver server
I0313 23:44:20.406687   22219 main.go:141] libmachine: (functional-112122) Calling .Close
I0313 23:44:20.407041   22219 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:44:20.407059   22219 main.go:141] libmachine: Making call to close connection to plugin binary
I0313 23:44:20.407069   22219 main.go:141] libmachine: Making call to close driver server
I0313 23:44:20.407070   22219 main.go:141] libmachine: (functional-112122) DBG | Closing plugin on server side
I0313 23:44:20.407077   22219 main.go:141] libmachine: (functional-112122) Calling .Close
I0313 23:44:20.407305   22219 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:44:20.407316   22219 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (1.79s)
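Note: the JSON format is the easiest one to post-process. A small sketch, assuming jq is available on the host (jq is not used by the test itself, it is only an illustration); this prints every tag in the cluster's image store:

    $ minikube -p functional-112122 image ls --format json | jq -r '.[] | .repoTags[]?'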

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-112122 image ls --format yaml --alsologtostderr:
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "190865876"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-112122
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 542df8c74b9fd4fd7be76ab073414fbe43255ddb32e2cac7074d66bb413bc3cd
repoDigests:
- localhost/minikube-local-cache-test@sha256:67bf7e713045740f2f383df1519f4ff226d3fe2621557644e58eeff540ec586d
repoTags:
- localhost/minikube-local-cache-test:functional-112122
size: "3345"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-112122 image ls --format yaml --alsologtostderr:
I0313 23:44:12.697060   21653 out.go:291] Setting OutFile to fd 1 ...
I0313 23:44:12.697284   21653 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:44:12.697552   21653 out.go:304] Setting ErrFile to fd 2...
I0313 23:44:12.697572   21653 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:44:12.698062   21653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
I0313 23:44:12.698818   21653 config.go:182] Loaded profile config "functional-112122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0313 23:44:12.698932   21653 config.go:182] Loaded profile config "functional-112122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0313 23:44:12.699312   21653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0313 23:44:12.699357   21653 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:44:12.714532   21653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
I0313 23:44:12.715079   21653 main.go:141] libmachine: () Calling .GetVersion
I0313 23:44:12.715680   21653 main.go:141] libmachine: Using API Version  1
I0313 23:44:12.715710   21653 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:44:12.716048   21653 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:44:12.716207   21653 main.go:141] libmachine: (functional-112122) Calling .GetState
I0313 23:44:12.718069   21653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0313 23:44:12.718115   21653 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:44:12.732822   21653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
I0313 23:44:12.733231   21653 main.go:141] libmachine: () Calling .GetVersion
I0313 23:44:12.733662   21653 main.go:141] libmachine: Using API Version  1
I0313 23:44:12.733686   21653 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:44:12.733971   21653 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:44:12.734243   21653 main.go:141] libmachine: (functional-112122) Calling .DriverName
I0313 23:44:12.734463   21653 ssh_runner.go:195] Run: systemctl --version
I0313 23:44:12.734490   21653 main.go:141] libmachine: (functional-112122) Calling .GetSSHHostname
I0313 23:44:12.737300   21653 main.go:141] libmachine: (functional-112122) DBG | domain functional-112122 has defined MAC address 52:54:00:f0:68:bc in network mk-functional-112122
I0313 23:44:12.737671   21653 main.go:141] libmachine: (functional-112122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:68:bc", ip: ""} in network mk-functional-112122: {Iface:virbr1 ExpiryTime:2024-03-14 00:36:37 +0000 UTC Type:0 Mac:52:54:00:f0:68:bc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:functional-112122 Clientid:01:52:54:00:f0:68:bc}
I0313 23:44:12.737706   21653 main.go:141] libmachine: (functional-112122) DBG | domain functional-112122 has defined IP address 192.168.39.224 and MAC address 52:54:00:f0:68:bc in network mk-functional-112122
I0313 23:44:12.737888   21653 main.go:141] libmachine: (functional-112122) Calling .GetSSHPort
I0313 23:44:12.738076   21653 main.go:141] libmachine: (functional-112122) Calling .GetSSHKeyPath
I0313 23:44:12.738238   21653 main.go:141] libmachine: (functional-112122) Calling .GetSSHUsername
I0313 23:44:12.738397   21653 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/functional-112122/id_rsa Username:docker}
I0313 23:44:12.830996   21653 ssh_runner.go:195] Run: sudo crictl images --output json
I0313 23:44:12.993562   21653 main.go:141] libmachine: Making call to close driver server
I0313 23:44:12.993578   21653 main.go:141] libmachine: (functional-112122) Calling .Close
I0313 23:44:12.993907   21653 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:44:12.993930   21653 main.go:141] libmachine: Making call to close connection to plugin binary
I0313 23:44:12.993944   21653 main.go:141] libmachine: Making call to close driver server
I0313 23:44:12.993952   21653 main.go:141] libmachine: (functional-112122) Calling .Close
I0313 23:44:12.994143   21653 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:44:12.994155   21653 main.go:141] libmachine: Making call to close connection to plugin binary
I0313 23:44:12.994222   21653 main.go:141] libmachine: (functional-112122) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.949585145s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-112122
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)
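Note: the Setup step only stages a host-side fixture, pulling a known image and retagging it with the profile name so that the later load/save/remove subtests have a uniquely named image to move around. The equivalent manual steps (a sketch):

    $ docker pull gcr.io/google-containers/addon-resizer:1.8.8
    $ docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-112122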

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (25.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-112122 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-112122 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-v7pf7" [4339aba2-c8ee-4c21-b1cd-f3c24be961de] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-v7pf7" [4339aba2-c8ee-4c21-b1cd-f3c24be961de] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 25.159740131s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (25.33s)
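Note: DeployApp stands up the workload that the later ServiceCmd subtests query: a Deployment running the echoserver image, exposed as a NodePort Service on port 8080. A sketch of the same steps:

    $ kubectl --context functional-112122 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    $ kubectl --context functional-112122 expose deployment hello-node --type=NodePort --port=8080
    $ kubectl --context functional-112122 get pods -l app=hello-node   # wait until Running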

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (13.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image load --daemon gcr.io/google-containers/addon-resizer:functional-112122 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 image load --daemon gcr.io/google-containers/addon-resizer:functional-112122 --alsologtostderr: (12.746485654s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (13.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image load --daemon gcr.io/google-containers/addon-resizer:functional-112122 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 image load --daemon gcr.io/google-containers/addon-resizer:functional-112122 --alsologtostderr: (4.024120349s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.820404038s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-112122
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image load --daemon gcr.io/google-containers/addon-resizer:functional-112122 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 image load --daemon gcr.io/google-containers/addon-resizer:functional-112122 --alsologtostderr: (7.232899701s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.31s)
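Note: the three *Daemon subtests above all push an image from the host's Docker daemon into the cluster's CRI-O image store; the last one first retags a different upstream version (1.8.9) to the same name to confirm the in-cluster copy is replaced rather than served from cache. The core command is (a sketch):

    $ minikube -p functional-112122 image load --daemon gcr.io/google-containers/addon-resizer:functional-112122
    $ minikube -p functional-112122 image ls | grep addon-resizer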

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 service list -o json
functional_test.go:1490: Took "554.003804ms" to run "out/minikube-linux-amd64 -p functional-112122 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.224:31097
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.224:31097
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
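Note: the ServiceCmd List, JSONOutput, HTTPS, Format and URL subtests all resolve the NodePort endpoint created by DeployApp; in this run it came out as 192.168.39.224:31097. A sketch of the same lookups:

    $ minikube -p functional-112122 service list -o json
    $ minikube -p functional-112122 service hello-node --url
    $ minikube -p functional-112122 service --namespace=default --https --url hello-node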

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image save gcr.io/google-containers/addon-resizer:functional-112122 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 image save gcr.io/google-containers/addon-resizer:functional-112122 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.415873234s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "318.265259ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "56.207903ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "246.705934ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "67.461924ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)
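Note: the ProfileCmd subtests only check that profile listing stays fast and well-formed in each output mode, and that the mistyped "profile lis" run in profile_not_create does not silently create a new profile. A sketch of the listing variants:

    $ minikube profile list
    $ minikube profile list -l
    $ minikube profile list -o json
    $ minikube profile list -o json --light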

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image rm gcr.io/google-containers/addon-resizer:functional-112122 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-112122 /tmp/TestFunctionalparallelMountCmdany-port453507104/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710373446492051231" to /tmp/TestFunctionalparallelMountCmdany-port453507104/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710373446492051231" to /tmp/TestFunctionalparallelMountCmdany-port453507104/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710373446492051231" to /tmp/TestFunctionalparallelMountCmdany-port453507104/001/test-1710373446492051231
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-112122 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (218.405844ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 13 23:44 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 13 23:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 13 23:44 test-1710373446492051231
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh cat /mount-9p/test-1710373446492051231
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-112122 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [783f2c63-fcb3-44b7-a35d-5efcc05e6294] Pending
helpers_test.go:344: "busybox-mount" [783f2c63-fcb3-44b7-a35d-5efcc05e6294] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [783f2c63-fcb3-44b7-a35d-5efcc05e6294] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [783f2c63-fcb3-44b7-a35d-5efcc05e6294] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004973032s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-112122 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-112122 /tmp/TestFunctionalparallelMountCmdany-port453507104/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.72s)
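Note: the any-port mount test keeps "minikube mount" running in the background, then verifies the 9p mount from both sides: findmnt inside the guest confirms the filesystem, and the busybox-mount pod reads and writes files through /mount-9p. The first findmnt attempt exiting with status 1 is just a race while the mount is still being established; the immediate retry succeeds. A rough manual equivalent (a sketch; the host path is only a placeholder):

    $ minikube -p functional-112122 mount /tmp/mount-demo:/mount-9p &
    $ minikube -p functional-112122 ssh "findmnt -T /mount-9p | grep 9p"
    $ minikube -p functional-112122 ssh "ls -la /mount-9p"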

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.52600265s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-112122
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 image save --daemon gcr.io/google-containers/addon-resizer:functional-112122 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-112122 image save --daemon gcr.io/google-containers/addon-resizer:functional-112122 --alsologtostderr: (1.058237391s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-112122
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.09s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-112122 /tmp/TestFunctionalparallelMountCmdspecific-port3395162558/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-112122 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (260.865599ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-112122 /tmp/TestFunctionalparallelMountCmdspecific-port3395162558/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-112122 ssh "sudo umount -f /mount-9p": exit status 1 (244.265376ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-112122 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-112122 /tmp/TestFunctionalparallelMountCmdspecific-port3395162558/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.76s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-112122 /tmp/TestFunctionalparallelMountCmdVerifyCleanup305222428/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-112122 /tmp/TestFunctionalparallelMountCmdVerifyCleanup305222428/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-112122 /tmp/TestFunctionalparallelMountCmdVerifyCleanup305222428/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-112122 ssh "findmnt -T" /mount1: exit status 1 (348.203952ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-112122 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-112122 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-112122 /tmp/TestFunctionalparallelMountCmdVerifyCleanup305222428/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-112122 /tmp/TestFunctionalparallelMountCmdVerifyCleanup305222428/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-112122 /tmp/TestFunctionalparallelMountCmdVerifyCleanup305222428/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.06s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-112122
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-112122
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-112122
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMutliControlPlane/serial/StartCluster (317.34s)

=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-504633 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0313 23:44:44.448346   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:45:12.137435   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0313 23:48:36.335434   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:48:36.340802   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:48:36.351126   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:48:36.371305   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:48:36.411928   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:48:36.492338   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:48:36.652872   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:48:36.973022   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:48:37.613648   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:48:38.894649   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:48:41.455971   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:48:46.576276   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:48:56.817199   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:49:17.298201   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0313 23:49:44.448977   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-504633 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m16.657982997s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (317.34s)

                                                
                                    
TestMutliControlPlane/serial/DeployApp (12.79s)

=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- rollout status deployment/busybox
E0313 23:49:58.258927   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-504633 -- rollout status deployment/busybox: (10.31389226s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-dx92g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-prmkb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-zfjjt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-dx92g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-prmkb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-zfjjt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-dx92g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-prmkb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-zfjjt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (12.79s)

                                                
                                    
TestMutliControlPlane/serial/PingHostFromPods (1.38s)

=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-dx92g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-dx92g -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-prmkb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-prmkb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-zfjjt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-504633 -- exec busybox-5b5d89c9d6-zfjjt -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.38s)

                                                
                                    
TestMutliControlPlane/serial/AddWorkerNode (48.48s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-504633 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-504633 -v=7 --alsologtostderr: (47.616882s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (48.48s)

                                                
                                    
TestMutliControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-504633 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterClusterStart (0.57s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.57s)

                                                
                                    
TestMutliControlPlane/serial/CopyFile (13.79s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp testdata/cp-test.txt ha-504633:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1259924449/001/cp-test_ha-504633.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633:/home/docker/cp-test.txt ha-504633-m02:/home/docker/cp-test_ha-504633_ha-504633-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m02 "sudo cat /home/docker/cp-test_ha-504633_ha-504633-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633:/home/docker/cp-test.txt ha-504633-m03:/home/docker/cp-test_ha-504633_ha-504633-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m03 "sudo cat /home/docker/cp-test_ha-504633_ha-504633-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633:/home/docker/cp-test.txt ha-504633-m04:/home/docker/cp-test_ha-504633_ha-504633-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m04 "sudo cat /home/docker/cp-test_ha-504633_ha-504633-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp testdata/cp-test.txt ha-504633-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1259924449/001/cp-test_ha-504633-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633-m02:/home/docker/cp-test.txt ha-504633:/home/docker/cp-test_ha-504633-m02_ha-504633.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633 "sudo cat /home/docker/cp-test_ha-504633-m02_ha-504633.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633-m02:/home/docker/cp-test.txt ha-504633-m03:/home/docker/cp-test_ha-504633-m02_ha-504633-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m03 "sudo cat /home/docker/cp-test_ha-504633-m02_ha-504633-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633-m02:/home/docker/cp-test.txt ha-504633-m04:/home/docker/cp-test_ha-504633-m02_ha-504633-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m04 "sudo cat /home/docker/cp-test_ha-504633-m02_ha-504633-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp testdata/cp-test.txt ha-504633-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1259924449/001/cp-test_ha-504633-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt ha-504633:/home/docker/cp-test_ha-504633-m03_ha-504633.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633 "sudo cat /home/docker/cp-test_ha-504633-m03_ha-504633.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt ha-504633-m02:/home/docker/cp-test_ha-504633-m03_ha-504633-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m02 "sudo cat /home/docker/cp-test_ha-504633-m03_ha-504633-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633-m03:/home/docker/cp-test.txt ha-504633-m04:/home/docker/cp-test_ha-504633-m03_ha-504633-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m04 "sudo cat /home/docker/cp-test_ha-504633-m03_ha-504633-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp testdata/cp-test.txt ha-504633-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1259924449/001/cp-test_ha-504633-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt ha-504633:/home/docker/cp-test_ha-504633-m04_ha-504633.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633 "sudo cat /home/docker/cp-test_ha-504633-m04_ha-504633.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt ha-504633-m02:/home/docker/cp-test_ha-504633-m04_ha-504633-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m02 "sudo cat /home/docker/cp-test_ha-504633-m04_ha-504633-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 cp ha-504633-m04:/home/docker/cp-test.txt ha-504633-m03:/home/docker/cp-test_ha-504633-m04_ha-504633-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 ssh -n ha-504633-m03 "sudo cat /home/docker/cp-test_ha-504633-m04_ha-504633-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (13.79s)

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.482392161s)
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
TestMutliControlPlane/serial/RestartCluster (334.72s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-504633 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0314 00:04:44.448632   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0314 00:04:59.381047   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0314 00:08:36.335236   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0314 00:09:44.449099   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-504633 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m33.926319338s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (334.72s)

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.42s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.42s)

                                                
                                    
TestMutliControlPlane/serial/AddSecondaryNode (73.3s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-504633 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-504633 --control-plane -v=7 --alsologtostderr: (1m12.452589869s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-504633 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (73.30s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                    
TestJSONOutput/start/Command (98.99s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-391140 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-391140 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m38.991874112s)
--- PASS: TestJSONOutput/start/Command (98.99s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-391140 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-391140 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.45s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-391140 --output=json --user=testUser
E0314 00:12:47.499248   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-391140 --output=json --user=testUser: (7.449357837s)
--- PASS: TestJSONOutput/stop/Command (7.45s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-756086 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-756086 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.597159ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"eeb9e450-05f8-47bd-b43a-9f4174953db5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-756086] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5146fe2f-9c21-4d1e-8d2e-5ebbf767a41e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18375"}}
	{"specversion":"1.0","id":"718b4c8e-952c-4804-9f6d-21620322e5a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c3c7f4a7-7ce0-471f-ae88-e1341d1204f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig"}}
	{"specversion":"1.0","id":"2177f054-d718-483c-b76c-30d608b4d6a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube"}}
	{"specversion":"1.0","id":"1cb9fe47-3e42-4dc3-a301-f066dda1e410","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"147e3d26-7e85-4f03-a30c-c03df561b5d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4b5062a7-91ac-42c4-b665-9a3f4e0e747b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-756086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-756086
--- PASS: TestErrorJSONOutput (0.22s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (90.9s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-213831 --driver=kvm2  --container-runtime=crio
E0314 00:13:36.335816   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-213831 --driver=kvm2  --container-runtime=crio: (42.789722123s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-217429 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-217429 --driver=kvm2  --container-runtime=crio: (45.210569748s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-213831
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-217429
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-217429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-217429
helpers_test.go:175: Cleaning up "first-213831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-213831
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-213831: (1.017830901s)
--- PASS: TestMinikubeProfile (90.90s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.69s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-618434 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0314 00:14:44.448315   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-618434 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.690587778s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.69s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-618434 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-618434 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.59s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-633519 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-633519 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.590779385s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.59s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-633519 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-633519 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-618434 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-633519 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-633519 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.34s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-633519
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-633519: (1.33615341s)
--- PASS: TestMountStart/serial/Stop (1.34s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.7s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-633519
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-633519: (23.697680385s)
--- PASS: TestMountStart/serial/RestartStopped (24.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-633519 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-633519 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (107.2s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-507871 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-507871 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m46.777693823s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.20s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.4s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-507871 -- rollout status deployment/busybox: (3.636625875s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- exec busybox-5b5d89c9d6-498th -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- exec busybox-5b5d89c9d6-vrskm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- exec busybox-5b5d89c9d6-498th -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- exec busybox-5b5d89c9d6-vrskm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- exec busybox-5b5d89c9d6-498th -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- exec busybox-5b5d89c9d6-vrskm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.40s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- exec busybox-5b5d89c9d6-498th -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- exec busybox-5b5d89c9d6-498th -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- exec busybox-5b5d89c9d6-vrskm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-507871 -- exec busybox-5b5d89c9d6-vrskm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)

                                                
                                    
TestMultiNode/serial/AddNode (42.93s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-507871 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-507871 -v 3 --alsologtostderr: (42.34822559s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.93s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-507871 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 cp testdata/cp-test.txt multinode-507871:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 cp multinode-507871:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3007186328/001/cp-test_multinode-507871.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 cp multinode-507871:/home/docker/cp-test.txt multinode-507871-m02:/home/docker/cp-test_multinode-507871_multinode-507871-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871-m02 "sudo cat /home/docker/cp-test_multinode-507871_multinode-507871-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 cp multinode-507871:/home/docker/cp-test.txt multinode-507871-m03:/home/docker/cp-test_multinode-507871_multinode-507871-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871-m03 "sudo cat /home/docker/cp-test_multinode-507871_multinode-507871-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 cp testdata/cp-test.txt multinode-507871-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 cp multinode-507871-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3007186328/001/cp-test_multinode-507871-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 cp multinode-507871-m02:/home/docker/cp-test.txt multinode-507871:/home/docker/cp-test_multinode-507871-m02_multinode-507871.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871 "sudo cat /home/docker/cp-test_multinode-507871-m02_multinode-507871.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 cp multinode-507871-m02:/home/docker/cp-test.txt multinode-507871-m03:/home/docker/cp-test_multinode-507871-m02_multinode-507871-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871-m03 "sudo cat /home/docker/cp-test_multinode-507871-m02_multinode-507871-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 cp testdata/cp-test.txt multinode-507871-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 cp multinode-507871-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3007186328/001/cp-test_multinode-507871-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 cp multinode-507871-m03:/home/docker/cp-test.txt multinode-507871:/home/docker/cp-test_multinode-507871-m03_multinode-507871.txt
E0314 00:18:36.335313   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871 "sudo cat /home/docker/cp-test_multinode-507871-m03_multinode-507871.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 cp multinode-507871-m03:/home/docker/cp-test.txt multinode-507871-m02:/home/docker/cp-test_multinode-507871-m03_multinode-507871-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 ssh -n multinode-507871-m02 "sudo cat /home/docker/cp-test_multinode-507871-m03_multinode-507871-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.79s)
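
For reference, the copies cycled through above reduce to three directions of the cp subcommand; an illustrative sketch against this profile (destination paths are arbitrary):

	# local file -> node
	minikube -p multinode-507871 cp testdata/cp-test.txt multinode-507871-m02:/home/docker/cp-test.txt
	# node -> local file
	minikube -p multinode-507871 cp multinode-507871-m02:/home/docker/cp-test.txt /tmp/cp-test.txt
	# node -> node
	minikube -p multinode-507871 cp multinode-507871:/home/docker/cp-test.txt multinode-507871-m03:/home/docker/cp-test.txt
	# each copy is then verified with: minikube -p multinode-507871 ssh -n <node> "sudo cat <path>"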

                                                
                                    
TestMultiNode/serial/StopNode (2.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-507871 node stop m03: (1.554637748s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-507871 status: exit status 7 (449.316885ms)

                                                
                                                
-- stdout --
	multinode-507871
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-507871-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-507871-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-507871 status --alsologtostderr: exit status 7 (453.18775ms)

                                                
                                                
-- stdout --
	multinode-507871
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-507871-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-507871-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 00:18:39.857945   38363 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:18:39.858049   38363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:18:39.858053   38363 out.go:304] Setting ErrFile to fd 2...
	I0314 00:18:39.858058   38363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:18:39.858228   38363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:18:39.858407   38363 out.go:298] Setting JSON to false
	I0314 00:18:39.858435   38363 mustload.go:65] Loading cluster: multinode-507871
	I0314 00:18:39.858478   38363 notify.go:220] Checking for updates...
	I0314 00:18:39.858791   38363 config.go:182] Loaded profile config "multinode-507871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:18:39.858809   38363 status.go:255] checking status of multinode-507871 ...
	I0314 00:18:39.859196   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:18:39.859251   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:18:39.875033   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I0314 00:18:39.875444   38363 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:18:39.876044   38363 main.go:141] libmachine: Using API Version  1
	I0314 00:18:39.876089   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:18:39.876549   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:18:39.876801   38363 main.go:141] libmachine: (multinode-507871) Calling .GetState
	I0314 00:18:39.878445   38363 status.go:330] multinode-507871 host status = "Running" (err=<nil>)
	I0314 00:18:39.878462   38363 host.go:66] Checking if "multinode-507871" exists ...
	I0314 00:18:39.878758   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:18:39.878826   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:18:39.895576   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0314 00:18:39.895987   38363 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:18:39.896482   38363 main.go:141] libmachine: Using API Version  1
	I0314 00:18:39.896504   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:18:39.896816   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:18:39.897016   38363 main.go:141] libmachine: (multinode-507871) Calling .GetIP
	I0314 00:18:39.899928   38363 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:18:39.900362   38363 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:18:39.900391   38363 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:18:39.900474   38363 host.go:66] Checking if "multinode-507871" exists ...
	I0314 00:18:39.900768   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:18:39.900802   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:18:39.915998   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43909
	I0314 00:18:39.916417   38363 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:18:39.916874   38363 main.go:141] libmachine: Using API Version  1
	I0314 00:18:39.916902   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:18:39.917255   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:18:39.917463   38363 main.go:141] libmachine: (multinode-507871) Calling .DriverName
	I0314 00:18:39.917717   38363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:18:39.917740   38363 main.go:141] libmachine: (multinode-507871) Calling .GetSSHHostname
	I0314 00:18:39.920929   38363 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:18:39.921382   38363 main.go:141] libmachine: (multinode-507871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a0:49", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:16:08 +0000 UTC Type:0 Mac:52:54:00:8b:a0:49 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-507871 Clientid:01:52:54:00:8b:a0:49}
	I0314 00:18:39.921414   38363 main.go:141] libmachine: (multinode-507871) DBG | domain multinode-507871 has defined IP address 192.168.39.60 and MAC address 52:54:00:8b:a0:49 in network mk-multinode-507871
	I0314 00:18:39.921566   38363 main.go:141] libmachine: (multinode-507871) Calling .GetSSHPort
	I0314 00:18:39.921777   38363 main.go:141] libmachine: (multinode-507871) Calling .GetSSHKeyPath
	I0314 00:18:39.921966   38363 main.go:141] libmachine: (multinode-507871) Calling .GetSSHUsername
	I0314 00:18:39.922091   38363 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/multinode-507871/id_rsa Username:docker}
	I0314 00:18:40.007683   38363 ssh_runner.go:195] Run: systemctl --version
	I0314 00:18:40.015260   38363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:18:40.034634   38363 kubeconfig.go:125] found "multinode-507871" server: "https://192.168.39.60:8443"
	I0314 00:18:40.034664   38363 api_server.go:166] Checking apiserver status ...
	I0314 00:18:40.034706   38363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:18:40.052835   38363 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W0314 00:18:40.065674   38363 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:18:40.065749   38363 ssh_runner.go:195] Run: ls
	I0314 00:18:40.071189   38363 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0314 00:18:40.075999   38363 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0314 00:18:40.076023   38363 status.go:422] multinode-507871 apiserver status = Running (err=<nil>)
	I0314 00:18:40.076035   38363 status.go:257] multinode-507871 status: &{Name:multinode-507871 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:18:40.076066   38363 status.go:255] checking status of multinode-507871-m02 ...
	I0314 00:18:40.076363   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:18:40.076407   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:18:40.091810   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42703
	I0314 00:18:40.092308   38363 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:18:40.092743   38363 main.go:141] libmachine: Using API Version  1
	I0314 00:18:40.092773   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:18:40.093092   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:18:40.093292   38363 main.go:141] libmachine: (multinode-507871-m02) Calling .GetState
	I0314 00:18:40.095086   38363 status.go:330] multinode-507871-m02 host status = "Running" (err=<nil>)
	I0314 00:18:40.095105   38363 host.go:66] Checking if "multinode-507871-m02" exists ...
	I0314 00:18:40.095531   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:18:40.095573   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:18:40.111250   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41651
	I0314 00:18:40.111724   38363 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:18:40.112263   38363 main.go:141] libmachine: Using API Version  1
	I0314 00:18:40.112288   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:18:40.112569   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:18:40.112778   38363 main.go:141] libmachine: (multinode-507871-m02) Calling .GetIP
	I0314 00:18:40.115674   38363 main.go:141] libmachine: (multinode-507871-m02) DBG | domain multinode-507871-m02 has defined MAC address 52:54:00:b2:2a:12 in network mk-multinode-507871
	I0314 00:18:40.116160   38363 main.go:141] libmachine: (multinode-507871-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:2a:12", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:17:14 +0000 UTC Type:0 Mac:52:54:00:b2:2a:12 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-507871-m02 Clientid:01:52:54:00:b2:2a:12}
	I0314 00:18:40.116183   38363 main.go:141] libmachine: (multinode-507871-m02) DBG | domain multinode-507871-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:b2:2a:12 in network mk-multinode-507871
	I0314 00:18:40.116353   38363 host.go:66] Checking if "multinode-507871-m02" exists ...
	I0314 00:18:40.116638   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:18:40.116673   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:18:40.132339   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44421
	I0314 00:18:40.132774   38363 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:18:40.133314   38363 main.go:141] libmachine: Using API Version  1
	I0314 00:18:40.133350   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:18:40.133718   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:18:40.133966   38363 main.go:141] libmachine: (multinode-507871-m02) Calling .DriverName
	I0314 00:18:40.134187   38363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:18:40.134211   38363 main.go:141] libmachine: (multinode-507871-m02) Calling .GetSSHHostname
	I0314 00:18:40.137528   38363 main.go:141] libmachine: (multinode-507871-m02) DBG | domain multinode-507871-m02 has defined MAC address 52:54:00:b2:2a:12 in network mk-multinode-507871
	I0314 00:18:40.137994   38363 main.go:141] libmachine: (multinode-507871-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:2a:12", ip: ""} in network mk-multinode-507871: {Iface:virbr1 ExpiryTime:2024-03-14 01:17:14 +0000 UTC Type:0 Mac:52:54:00:b2:2a:12 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-507871-m02 Clientid:01:52:54:00:b2:2a:12}
	I0314 00:18:40.138024   38363 main.go:141] libmachine: (multinode-507871-m02) DBG | domain multinode-507871-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:b2:2a:12 in network mk-multinode-507871
	I0314 00:18:40.138209   38363 main.go:141] libmachine: (multinode-507871-m02) Calling .GetSSHPort
	I0314 00:18:40.138405   38363 main.go:141] libmachine: (multinode-507871-m02) Calling .GetSSHKeyPath
	I0314 00:18:40.138566   38363 main.go:141] libmachine: (multinode-507871-m02) Calling .GetSSHUsername
	I0314 00:18:40.138712   38363 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4912/.minikube/machines/multinode-507871-m02/id_rsa Username:docker}
	I0314 00:18:40.218712   38363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:18:40.235670   38363 status.go:257] multinode-507871-m02 status: &{Name:multinode-507871-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:18:40.235712   38363 status.go:255] checking status of multinode-507871-m03 ...
	I0314 00:18:40.236060   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 00:18:40.236105   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:18:40.251064   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33431
	I0314 00:18:40.251522   38363 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:18:40.252183   38363 main.go:141] libmachine: Using API Version  1
	I0314 00:18:40.252209   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:18:40.252590   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:18:40.252762   38363 main.go:141] libmachine: (multinode-507871-m03) Calling .GetState
	I0314 00:18:40.254446   38363 status.go:330] multinode-507871-m03 host status = "Stopped" (err=<nil>)
	I0314 00:18:40.254471   38363 status.go:343] host is not running, skipping remaining checks
	I0314 00:18:40.254476   38363 status.go:257] multinode-507871-m03 status: &{Name:multinode-507871-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)
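
The two non-zero exits above are the behaviour being asserted, not failures: status still reports every node, but returns a non-zero code (7 in this run) while any host is stopped, so callers have to treat that code as informational. A small sketch of the same check:

	minikube -p multinode-507871 node stop m03
	minikube -p multinode-507871 status || echo "status exited with $? (a stopped node, not a command failure)"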

                                                
                                    
TestMultiNode/serial/StartAfterStop (33.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-507871 node start m03 -v=7 --alsologtostderr: (32.611115748s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (33.28s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-507871 node delete m03: (1.903011s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.45s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (170.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-507871 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0314 00:28:36.335944   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
E0314 00:29:27.499924   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-507871 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m49.625860147s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-507871 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (170.19s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-507871
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-507871-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-507871-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (79.324728ms)

                                                
                                                
-- stdout --
	* [multinode-507871-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-507871-m02' is duplicated with machine name 'multinode-507871-m02' in profile 'multinode-507871'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-507871-m03 --driver=kvm2  --container-runtime=crio
E0314 00:29:44.448261   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-507871-m03 --driver=kvm2  --container-runtime=crio: (46.09353707s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-507871
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-507871: exit status 80 (226.570557ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-507871 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-507871-m03 already exists in multinode-507871-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-507871-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.44s)
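
Both rejections above come from name collisions between standalone profiles and the machine names inside a multi-node profile. An illustrative replay, assuming the multinode-507871 cluster is still running:

	minikube start -p multinode-507871-m02 --driver=kvm2 --container-runtime=crio   # refused: the name is already a machine inside profile multinode-507871 (exit 14)
	minikube start -p multinode-507871-m03 --driver=kvm2 --container-runtime=crio   # allowed: becomes its own single-node profile
	minikube node add -p multinode-507871                                           # refused: the next node name (m03) now clashes with that profile (exit 80)
	minikube delete -p multinode-507871-m03                                         # clean up, as the test does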

                                                
                                    
TestScheduledStopUnix (116.37s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-512463 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-512463 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.564325331s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-512463 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-512463 -n scheduled-stop-512463
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-512463 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-512463 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-512463 -n scheduled-stop-512463
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-512463
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-512463 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-512463
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-512463: exit status 7 (75.223897ms)

                                                
                                                
-- stdout --
	scheduled-stop-512463
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-512463 -n scheduled-stop-512463
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-512463 -n scheduled-stop-512463: exit status 7 (74.376643ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-512463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-512463
--- PASS: TestScheduledStopUnix (116.37s)
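
Stripped of the status polling, the scheduled-stop sequence exercised above is roughly:

	minikube stop -p scheduled-stop-512463 --schedule 5m        # arm a stop five minutes out
	minikube stop -p scheduled-stop-512463 --schedule 15s       # re-arm with a shorter delay
	minikube stop -p scheduled-stop-512463 --cancel-scheduled   # cancel the pending stop
	minikube stop -p scheduled-stop-512463 --schedule 15s       # arm again and let it fire
	minikube status -p scheduled-stop-512463                    # exits 7 once the host reports Stopped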

                                                
                                    
TestRunningBinaryUpgrade (194.25s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.641441520 start -p running-upgrade-863544 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.641441520 start -p running-upgrade-863544 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m36.253703405s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-863544 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-863544 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m34.503729033s)
helpers_test.go:175: Cleaning up "running-upgrade-863544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-863544
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-863544: (1.162734584s)
--- PASS: TestRunningBinaryUpgrade (194.25s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.31s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (198.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3756935552 start -p stopped-upgrade-848457 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3756935552 start -p stopped-upgrade-848457 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m5.436473605s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3756935552 -p stopped-upgrade-848457 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3756935552 -p stopped-upgrade-848457 stop: (2.312354796s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-848457 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0314 00:39:44.448421   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-848457 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.606560926s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (198.36s)

                                                
                                    
TestNetworkPlugins/group/false (3.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-326260 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-326260 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (112.101413ms)

                                                
                                                
-- stdout --
	* [false-326260] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 00:37:28.845342   44723 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:37:28.845494   44723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:37:28.845507   44723 out.go:304] Setting ErrFile to fd 2...
	I0314 00:37:28.845517   44723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:37:28.845830   44723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4912/.minikube/bin
	I0314 00:37:28.846595   44723 out.go:298] Setting JSON to false
	I0314 00:37:28.847786   44723 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4792,"bootTime":1710371857,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:37:28.847877   44723 start.go:139] virtualization: kvm guest
	I0314 00:37:28.850090   44723 out.go:177] * [false-326260] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:37:28.851557   44723 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:37:28.851532   44723 notify.go:220] Checking for updates...
	I0314 00:37:28.852775   44723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:37:28.854200   44723 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	I0314 00:37:28.855397   44723 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	I0314 00:37:28.856516   44723 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:37:28.857688   44723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:37:28.859411   44723 config.go:182] Loaded profile config "offline-crio-820136": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 00:37:28.859533   44723 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:37:28.892529   44723 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 00:37:28.893790   44723 start.go:297] selected driver: kvm2
	I0314 00:37:28.893807   44723 start.go:901] validating driver "kvm2" against <nil>
	I0314 00:37:28.893835   44723 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:37:28.896060   44723 out.go:177] 
	W0314 00:37:28.897298   44723 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0314 00:37:28.898416   44723 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-326260 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-326260

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-326260

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-326260

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-326260

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-326260

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-326260

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-326260

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-326260

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-326260

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-326260

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-326260

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-326260" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-326260" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-326260

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

>>> host: cri-dockerd version:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

>>> host: containerd daemon status:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

>>> host: containerd daemon config:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

>>> host: /etc/containerd/config.toml:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

>>> host: containerd config dump:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

>>> host: crio daemon status:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

>>> host: crio daemon config:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

>>> host: /etc/crio:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

>>> host: crio config:
* Profile "false-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-326260"

----------------------- debugLogs end: false-326260 [took: 3.207970152s] --------------------------------
helpers_test.go:175: Cleaning up "false-326260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-326260
--- PASS: TestNetworkPlugins/group/false (3.47s)

TestPause/serial/Start (127.19s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-501107 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-501107 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m7.187143551s)
--- PASS: TestPause/serial/Start (127.19s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-848457
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-848457: (1.005380417s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-576005 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-576005 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (118.161165ms)

-- stdout --
	* [NoKubernetes-576005] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

TestNoKubernetes/serial/StartWithK8s (72.26s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-576005 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-576005 --driver=kvm2  --container-runtime=crio: (1m11.989292391s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-576005 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (72.26s)

TestNoKubernetes/serial/StartWithStopK8s (6.53s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-576005 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-576005 --no-kubernetes --driver=kvm2  --container-runtime=crio: (5.238409085s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-576005 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-576005 status -o json: exit status 2 (238.793403ms)

-- stdout --
	{"Name":"NoKubernetes-576005","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-576005
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-576005: (1.048846123s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.53s)

TestNoKubernetes/serial/Start (29.03s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-576005 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-576005 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.033283579s)
--- PASS: TestNoKubernetes/serial/Start (29.03s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-576005 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-576005 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.449045ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (1.08s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.08s)

TestNoKubernetes/serial/Stop (1.72s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-576005
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-576005: (1.719895482s)
--- PASS: TestNoKubernetes/serial/Stop (1.72s)

TestNoKubernetes/serial/StartNoArgs (39.68s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-576005 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-576005 --driver=kvm2  --container-runtime=crio: (39.675119184s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (39.68s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-576005 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-576005 "sudo systemctl is-active --quiet service kubelet": exit status 1 (226.174323ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestNetworkPlugins/group/auto/Start (75.31s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-326260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-326260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m15.313393827s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.31s)

TestNetworkPlugins/group/kindnet/Start (95.24s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-326260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0314 00:43:36.335997   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-326260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m35.235002948s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (95.24s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-326260 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-326260 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pt5xz" [2812d8bb-de33-4dd1-b3a8-8447b2fc2cfa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pt5xz" [2812d8bb-de33-4dd1-b3a8-8447b2fc2cfa] Running
E0314 00:44:44.448604   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005822791s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)

TestNetworkPlugins/group/calico/Start (92.05s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-326260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-326260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m32.049839728s)
--- PASS: TestNetworkPlugins/group/calico/Start (92.05s)

TestNetworkPlugins/group/auto/DNS (16.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-326260 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-326260 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.206627001s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-326260 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (16.12s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-n548l" [33d12872-445c-4a1e-ad23-080807de2260] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00433197s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-326260 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-326260 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vblxr" [4ac268b5-ad81-4dbb-a149-48231b6a71af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vblxr" [4ac268b5-ad81-4dbb-a149-48231b6a71af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.026893728s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-326260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-326260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-326260 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-326260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-326260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/Start (81.99s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-326260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-326260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.992561171s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.99s)

TestNetworkPlugins/group/enable-default-cni/Start (135.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-326260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0314 00:46:07.500395   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-326260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m15.091230319s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (135.09s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xfp9r" [7cfc3d45-f7b8-46c1-9ec8-f5d2d72b8f2a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.011118913s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-326260 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-326260 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qw9fl" [1fd6aa9b-9952-4264-86a5-5a712fa1225e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qw9fl" [1fd6aa9b-9952-4264-86a5-5a712fa1225e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005925673s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.41s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-326260 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-326260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-326260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-326260 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-326260 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dj68t" [a2fd339e-7028-4f22-8fef-a977b9057034] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dj68t" [a2fd339e-7028-4f22-8fef-a977b9057034] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005276473s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.36s)

TestNetworkPlugins/group/flannel/Start (84.87s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-326260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-326260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m24.874441068s)
--- PASS: TestNetworkPlugins/group/flannel/Start (84.87s)

TestNetworkPlugins/group/bridge/Start (127.97s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-326260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-326260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m7.972250731s)
--- PASS: TestNetworkPlugins/group/bridge/Start (127.97s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-326260 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-326260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-326260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-326260 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-326260 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-p2t6m" [949a46f1-7669-4df4-9a4f-90e79a46e201] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-p2t6m" [949a46f1-7669-4df4-9a4f-90e79a46e201] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.005329914s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.33s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-326260 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-326260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-326260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cr4rg" [7962701b-16ce-45ab-b1e2-3c5e14267afa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0077851s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestStartStop/group/no-preload/serial/FirstStart (146.39s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-585806 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-585806 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m26.394268253s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (146.39s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-326260 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-326260 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2qqcg" [55e43df3-aa5c-4df3-a217-aaeae929adc2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2qqcg" [55e43df3-aa5c-4df3-a217-aaeae929adc2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004093398s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.32s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-326260 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-326260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-326260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestStartStop/group/embed-certs/serial/FirstStart (69s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-164135 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-164135 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m9.004024574s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.00s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-326260 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-326260 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rp4nh" [e77ff03f-8008-4f39-af23-818389117155] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rp4nh" [e77ff03f-8008-4f39-af23-818389117155] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005328457s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-326260 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-326260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-326260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-652215 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0314 00:49:35.522797   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:49:35.528085   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:49:35.538377   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:49:35.558675   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:49:35.599025   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:49:35.679498   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:49:35.839720   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:49:36.160123   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:49:36.800731   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:49:38.081691   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:49:40.642547   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:49:44.448729   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/addons-524943/client.crt: no such file or directory
E0314 00:49:45.763136   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:49:55.862259   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 00:49:55.867667   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 00:49:55.877972   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 00:49:55.898282   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 00:49:55.938854   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 00:49:56.003806   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/auto-326260/client.crt: no such file or directory
E0314 00:49:56.019400   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 00:49:56.180490   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 00:49:56.501496   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 00:49:57.142383   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 00:49:58.423130   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-652215 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m39.108607895s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.11s)

TestStartStop/group/embed-certs/serial/DeployApp (12.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-164135 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7b24e199-4e82-4c69-bb1f-11fb49d244fe] Pending
helpers_test.go:344: "busybox" [7b24e199-4e82-4c69-bb1f-11fb49d244fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0314 00:50:00.983987   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
E0314 00:50:06.104239   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
helpers_test.go:344: "busybox" [7b24e199-4e82-4c69-bb1f-11fb49d244fe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.004657185s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-164135 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.32s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-164135 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-164135 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.117063536s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-164135 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/no-preload/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-585806 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1dfd2648-2774-42e2-8674-f4f1b8cc2856] Pending
helpers_test.go:344: "busybox" [1dfd2648-2774-42e2-8674-f4f1b8cc2856] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1dfd2648-2774-42e2-8674-f4f1b8cc2856] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00487706s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-585806 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.30s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-585806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-585806 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-652215 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [15df755f-762d-4797-8c90-09e96eb32663] Pending
helpers_test.go:344: "busybox" [15df755f-762d-4797-8c90-09e96eb32663] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0314 00:51:11.745446   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 00:51:11.750751   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 00:51:11.761054   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 00:51:11.781366   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 00:51:11.821713   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 00:51:11.902139   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 00:51:12.062742   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
helpers_test.go:344: "busybox" [15df755f-762d-4797-8c90-09e96eb32663] Running
E0314 00:51:12.383642   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 00:51:13.023883   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 00:51:14.304379   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
E0314 00:51:16.865508   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004255298s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-652215 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-652215 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0314 00:51:17.786041   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/kindnet-326260/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-652215 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.053200656s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-652215 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (643.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-164135 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0314 00:52:46.695500   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:52:46.700794   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:52:46.711120   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:52:46.731463   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:52:46.771841   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:52:46.852239   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:52:47.012844   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:52:47.333093   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:52:47.974219   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:52:49.254821   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:52:51.815501   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
E0314 00:52:56.936399   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-164135 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (10m42.865448818s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-164135 -n embed-certs-164135
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (643.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (545.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-585806 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0314 00:53:36.158443   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:53:36.335830   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-585806 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (9m5.57021769s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-585806 -n no-preload-585806
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (545.87s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (532.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-652215 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0314 00:53:55.588832   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/calico-326260/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-652215 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (8m52.668518423s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652215 -n default-k8s-diff-port-652215
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (532.94s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (6.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-004791 --alsologtostderr -v=3
E0314 00:53:56.639039   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
E0314 00:54:00.714259   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:54:00.719520   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:54:00.729802   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:54:00.750102   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:54:00.790434   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:54:00.870978   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:54:01.031404   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:54:01.352000   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
E0314 00:54:01.992866   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/bridge-326260/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-004791 --alsologtostderr -v=3: (6.322528608s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-004791 -n old-k8s-version-004791: exit status 7 (75.32427ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-004791 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (56.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-970859 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0314 01:17:46.695518   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/enable-default-cni-326260/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-970859 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (56.318196917s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (56.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-970859 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-970859 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.08835907s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-970859 --alsologtostderr -v=3
E0314 01:18:15.675853   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/flannel-326260/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-970859 --alsologtostderr -v=3: (10.387050324s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-970859 -n newest-cni-970859
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-970859 -n newest-cni-970859: exit status 7 (77.285515ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-970859 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-970859 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0314 01:18:36.335262   12268 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4912/.minikube/profiles/functional-112122/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-970859 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (37.101809986s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-970859 -n newest-cni-970859
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-970859 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-970859 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-970859 -n newest-cni-970859
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-970859 -n newest-cni-970859: exit status 2 (254.54017ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-970859 -n newest-cni-970859
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-970859 -n newest-cni-970859: exit status 2 (252.611688ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-970859 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-970859 -n newest-cni-970859
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-970859 -n newest-cni-970859
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.47s)

                                                
                                    

Test skip (39/319)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
261 TestNetworkPlugins/group/kubenet 3.4
270 TestNetworkPlugins/group/cilium 3.72
278 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-326260 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-326260

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-326260

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-326260

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-326260

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-326260

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-326260

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-326260

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-326260

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-326260

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-326260

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-326260

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-326260" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-326260" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-326260" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-326260

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-326260"

----------------------- debugLogs end: kubenet-326260 [took: 3.259634785s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-326260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-326260
--- SKIP: TestNetworkPlugins/group/kubenet (3.40s)

x
+
TestNetworkPlugins/group/cilium (3.72s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-326260 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-326260

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-326260

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-326260

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-326260

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-326260

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-326260

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-326260

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-326260

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-326260

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-326260

>>> host: /etc/nsswitch.conf:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: /etc/hosts:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: /etc/resolv.conf:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-326260

>>> host: crictl pods:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: crictl containers:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> k8s: describe netcat deployment:
error: context "cilium-326260" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-326260" does not exist

>>> k8s: netcat logs:
error: context "cilium-326260" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-326260" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-326260" does not exist

>>> k8s: coredns logs:
error: context "cilium-326260" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-326260" does not exist

>>> k8s: api server logs:
error: context "cilium-326260" does not exist

>>> host: /etc/cni:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: ip a s:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: ip r s:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: iptables-save:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: iptables table nat:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-326260

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-326260

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-326260" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-326260" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-326260

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-326260

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-326260" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-326260" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-326260" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-326260" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-326260" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: kubelet daemon config:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> k8s: kubelet logs:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-326260

>>> host: docker daemon status:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: docker daemon config:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: docker system info:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: cri-docker daemon status:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: cri-docker daemon config:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: cri-dockerd version:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: containerd daemon status:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: containerd daemon config:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: containerd config dump:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: crio daemon status:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: crio daemon config:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: /etc/crio:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

>>> host: crio config:
* Profile "cilium-326260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-326260"

----------------------- debugLogs end: cilium-326260 [took: 3.561458205s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-326260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-326260
--- SKIP: TestNetworkPlugins/group/cilium (3.72s)

x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-573365" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-573365
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
